Test Report: KVM_Linux_crio 19346

a97ed275d9afb14524a68c67a981a32c27d545ab:2024-07-30:35563

Failed tests (14/233)

TestAddons/parallel/Ingress (145.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-091578 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-091578 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-091578 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5856dac7-4a90-4abe-aebd-099d4478d1a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5856dac7-4a90-4abe-aebd-099d4478d1a4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003968439s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-091578 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.237347054s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-091578 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.214
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable ingress --alsologtostderr -v=1
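To re-check the failing step by hand against the same profile, the two commands already recorded above can be re-run directly (a sketch only; curl exit status 28, surfaced here as "ssh: Process exited with status 28", indicates the request timed out):

	kubectl --context addons-091578 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	out/minikube-linux-amd64 -p addons-091578 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"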
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-091578 -n addons-091578
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-091578 logs -n 25: (1.534222499s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-800416                                                                     | download-only-800416 | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC | 30 Jul 24 00:06 UTC |
	| delete  | -p download-only-232646                                                                     | download-only-232646 | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC | 30 Jul 24 00:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-248146 | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC |                     |
	|         | binary-mirror-248146                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38989                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-248146                                                                     | binary-mirror-248146 | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC | 30 Jul 24 00:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC |                     |
	|         | addons-091578                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC |                     |
	|         | addons-091578                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-091578 --wait=true                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC | 30 Jul 24 00:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:13 UTC | 30 Jul 24 00:13 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:13 UTC | 30 Jul 24 00:14 UTC |
	|         | addons-091578                                                                               |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-091578 ssh cat                                                                       | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | /opt/local-path-provisioner/pvc-f03646c2-17c5-467c-9078-e8eb4c5ef372_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-091578 ip                                                                            | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | -p addons-091578                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | -p addons-091578                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | addons-091578                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-091578 ssh curl -s                                                                   | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-091578 addons                                                                        | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-091578 addons                                                                        | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-091578 ip                                                                            | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:16 UTC | 30 Jul 24 00:16 UTC |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:16 UTC | 30 Jul 24 00:16 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:16 UTC | 30 Jul 24 00:16 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:06:04
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:06:04.067602  503585 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:06:04.067877  503585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:06:04.067887  503585 out.go:304] Setting ErrFile to fd 2...
	I0730 00:06:04.067892  503585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:06:04.068081  503585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:06:04.068780  503585 out.go:298] Setting JSON to false
	I0730 00:06:04.069698  503585 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6506,"bootTime":1722291458,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:06:04.069760  503585 start.go:139] virtualization: kvm guest
	I0730 00:06:04.071938  503585 out.go:177] * [addons-091578] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:06:04.073318  503585 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:06:04.073379  503585 notify.go:220] Checking for updates...
	I0730 00:06:04.075971  503585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:06:04.077422  503585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:06:04.078580  503585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:06:04.079815  503585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:06:04.080994  503585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:06:04.082458  503585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:06:04.114825  503585 out.go:177] * Using the kvm2 driver based on user configuration
	I0730 00:06:04.116132  503585 start.go:297] selected driver: kvm2
	I0730 00:06:04.116145  503585 start.go:901] validating driver "kvm2" against <nil>
	I0730 00:06:04.116158  503585 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:06:04.116959  503585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:06:04.117062  503585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:06:04.133091  503585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:06:04.133148  503585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 00:06:04.133403  503585 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:06:04.133475  503585 cni.go:84] Creating CNI manager for ""
	I0730 00:06:04.133493  503585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:06:04.133506  503585 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0730 00:06:04.133582  503585 start.go:340] cluster config:
	{Name:addons-091578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:06:04.133725  503585 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:06:04.135650  503585 out.go:177] * Starting "addons-091578" primary control-plane node in "addons-091578" cluster
	I0730 00:06:04.136863  503585 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:06:04.136902  503585 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:06:04.136917  503585 cache.go:56] Caching tarball of preloaded images
	I0730 00:06:04.137035  503585 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:06:04.137049  503585 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:06:04.138283  503585 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/config.json ...
	I0730 00:06:04.138332  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/config.json: {Name:mka41c8a1a5a7058f81c0b1b0ebe27d61d42132f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:04.138502  503585 start.go:360] acquireMachinesLock for addons-091578: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:06:04.138557  503585 start.go:364] duration metric: took 35.748µs to acquireMachinesLock for "addons-091578"
	I0730 00:06:04.138577  503585 start.go:93] Provisioning new machine with config: &{Name:addons-091578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:06:04.138684  503585 start.go:125] createHost starting for "" (driver="kvm2")
	I0730 00:06:04.140460  503585 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0730 00:06:04.140601  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:04.140634  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:04.155348  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
	I0730 00:06:04.155869  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:04.156429  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:04.156449  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:04.156812  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:04.157001  503585 main.go:141] libmachine: (addons-091578) Calling .GetMachineName
	I0730 00:06:04.157263  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:04.157457  503585 start.go:159] libmachine.API.Create for "addons-091578" (driver="kvm2")
	I0730 00:06:04.157486  503585 client.go:168] LocalClient.Create starting
	I0730 00:06:04.157522  503585 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 00:06:04.269943  503585 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 00:06:04.406430  503585 main.go:141] libmachine: Running pre-create checks...
	I0730 00:06:04.406456  503585 main.go:141] libmachine: (addons-091578) Calling .PreCreateCheck
	I0730 00:06:04.407044  503585 main.go:141] libmachine: (addons-091578) Calling .GetConfigRaw
	I0730 00:06:04.407624  503585 main.go:141] libmachine: Creating machine...
	I0730 00:06:04.407644  503585 main.go:141] libmachine: (addons-091578) Calling .Create
	I0730 00:06:04.407954  503585 main.go:141] libmachine: (addons-091578) Creating KVM machine...
	I0730 00:06:04.409192  503585 main.go:141] libmachine: (addons-091578) DBG | found existing default KVM network
	I0730 00:06:04.411724  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:04.409985  503607 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002046d0}
	I0730 00:06:04.411756  503585 main.go:141] libmachine: (addons-091578) DBG | created network xml: 
	I0730 00:06:04.411779  503585 main.go:141] libmachine: (addons-091578) DBG | <network>
	I0730 00:06:04.411794  503585 main.go:141] libmachine: (addons-091578) DBG |   <name>mk-addons-091578</name>
	I0730 00:06:04.411811  503585 main.go:141] libmachine: (addons-091578) DBG |   <dns enable='no'/>
	I0730 00:06:04.411827  503585 main.go:141] libmachine: (addons-091578) DBG |   
	I0730 00:06:04.411843  503585 main.go:141] libmachine: (addons-091578) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0730 00:06:04.411857  503585 main.go:141] libmachine: (addons-091578) DBG |     <dhcp>
	I0730 00:06:04.411880  503585 main.go:141] libmachine: (addons-091578) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0730 00:06:04.411892  503585 main.go:141] libmachine: (addons-091578) DBG |     </dhcp>
	I0730 00:06:04.411912  503585 main.go:141] libmachine: (addons-091578) DBG |   </ip>
	I0730 00:06:04.411928  503585 main.go:141] libmachine: (addons-091578) DBG |   
	I0730 00:06:04.411982  503585 main.go:141] libmachine: (addons-091578) DBG | </network>
	I0730 00:06:04.412023  503585 main.go:141] libmachine: (addons-091578) DBG | 
	I0730 00:06:04.416798  503585 main.go:141] libmachine: (addons-091578) DBG | trying to create private KVM network mk-addons-091578 192.168.39.0/24...
	I0730 00:06:04.494036  503585 main.go:141] libmachine: (addons-091578) DBG | private KVM network mk-addons-091578 192.168.39.0/24 created
	I0730 00:06:04.494075  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:04.494007  503607 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:06:04.494090  503585 main.go:141] libmachine: (addons-091578) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578 ...
	I0730 00:06:04.494134  503585 main.go:141] libmachine: (addons-091578) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 00:06:04.494250  503585 main.go:141] libmachine: (addons-091578) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 00:06:04.804358  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:04.804213  503607 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa...
	I0730 00:06:05.032440  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:05.032279  503607 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/addons-091578.rawdisk...
	I0730 00:06:05.032472  503585 main.go:141] libmachine: (addons-091578) DBG | Writing magic tar header
	I0730 00:06:05.032483  503585 main.go:141] libmachine: (addons-091578) DBG | Writing SSH key tar header
	I0730 00:06:05.032491  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:05.032406  503607 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578 ...
	I0730 00:06:05.032507  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578
	I0730 00:06:05.032589  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 00:06:05.032608  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:06:05.032617  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578 (perms=drwx------)
	I0730 00:06:05.032667  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 00:06:05.032692  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 00:06:05.032701  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 00:06:05.032740  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 00:06:05.032749  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 00:06:05.032758  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 00:06:05.032769  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins
	I0730 00:06:05.032786  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 00:06:05.032792  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home
	I0730 00:06:05.032800  503585 main.go:141] libmachine: (addons-091578) DBG | Skipping /home - not owner
	I0730 00:06:05.032809  503585 main.go:141] libmachine: (addons-091578) Creating domain...
	I0730 00:06:05.033700  503585 main.go:141] libmachine: (addons-091578) define libvirt domain using xml: 
	I0730 00:06:05.033727  503585 main.go:141] libmachine: (addons-091578) <domain type='kvm'>
	I0730 00:06:05.033738  503585 main.go:141] libmachine: (addons-091578)   <name>addons-091578</name>
	I0730 00:06:05.033750  503585 main.go:141] libmachine: (addons-091578)   <memory unit='MiB'>4000</memory>
	I0730 00:06:05.033762  503585 main.go:141] libmachine: (addons-091578)   <vcpu>2</vcpu>
	I0730 00:06:05.033773  503585 main.go:141] libmachine: (addons-091578)   <features>
	I0730 00:06:05.033783  503585 main.go:141] libmachine: (addons-091578)     <acpi/>
	I0730 00:06:05.033794  503585 main.go:141] libmachine: (addons-091578)     <apic/>
	I0730 00:06:05.033803  503585 main.go:141] libmachine: (addons-091578)     <pae/>
	I0730 00:06:05.033811  503585 main.go:141] libmachine: (addons-091578)     
	I0730 00:06:05.033820  503585 main.go:141] libmachine: (addons-091578)   </features>
	I0730 00:06:05.033833  503585 main.go:141] libmachine: (addons-091578)   <cpu mode='host-passthrough'>
	I0730 00:06:05.033858  503585 main.go:141] libmachine: (addons-091578)   
	I0730 00:06:05.033886  503585 main.go:141] libmachine: (addons-091578)   </cpu>
	I0730 00:06:05.033899  503585 main.go:141] libmachine: (addons-091578)   <os>
	I0730 00:06:05.033909  503585 main.go:141] libmachine: (addons-091578)     <type>hvm</type>
	I0730 00:06:05.033920  503585 main.go:141] libmachine: (addons-091578)     <boot dev='cdrom'/>
	I0730 00:06:05.033930  503585 main.go:141] libmachine: (addons-091578)     <boot dev='hd'/>
	I0730 00:06:05.033942  503585 main.go:141] libmachine: (addons-091578)     <bootmenu enable='no'/>
	I0730 00:06:05.033955  503585 main.go:141] libmachine: (addons-091578)   </os>
	I0730 00:06:05.033967  503585 main.go:141] libmachine: (addons-091578)   <devices>
	I0730 00:06:05.033979  503585 main.go:141] libmachine: (addons-091578)     <disk type='file' device='cdrom'>
	I0730 00:06:05.033996  503585 main.go:141] libmachine: (addons-091578)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/boot2docker.iso'/>
	I0730 00:06:05.034009  503585 main.go:141] libmachine: (addons-091578)       <target dev='hdc' bus='scsi'/>
	I0730 00:06:05.034020  503585 main.go:141] libmachine: (addons-091578)       <readonly/>
	I0730 00:06:05.034032  503585 main.go:141] libmachine: (addons-091578)     </disk>
	I0730 00:06:05.034045  503585 main.go:141] libmachine: (addons-091578)     <disk type='file' device='disk'>
	I0730 00:06:05.034057  503585 main.go:141] libmachine: (addons-091578)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 00:06:05.034071  503585 main.go:141] libmachine: (addons-091578)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/addons-091578.rawdisk'/>
	I0730 00:06:05.034084  503585 main.go:141] libmachine: (addons-091578)       <target dev='hda' bus='virtio'/>
	I0730 00:06:05.034096  503585 main.go:141] libmachine: (addons-091578)     </disk>
	I0730 00:06:05.034110  503585 main.go:141] libmachine: (addons-091578)     <interface type='network'>
	I0730 00:06:05.034123  503585 main.go:141] libmachine: (addons-091578)       <source network='mk-addons-091578'/>
	I0730 00:06:05.034133  503585 main.go:141] libmachine: (addons-091578)       <model type='virtio'/>
	I0730 00:06:05.034144  503585 main.go:141] libmachine: (addons-091578)     </interface>
	I0730 00:06:05.034154  503585 main.go:141] libmachine: (addons-091578)     <interface type='network'>
	I0730 00:06:05.034166  503585 main.go:141] libmachine: (addons-091578)       <source network='default'/>
	I0730 00:06:05.034183  503585 main.go:141] libmachine: (addons-091578)       <model type='virtio'/>
	I0730 00:06:05.034196  503585 main.go:141] libmachine: (addons-091578)     </interface>
	I0730 00:06:05.034206  503585 main.go:141] libmachine: (addons-091578)     <serial type='pty'>
	I0730 00:06:05.034218  503585 main.go:141] libmachine: (addons-091578)       <target port='0'/>
	I0730 00:06:05.034227  503585 main.go:141] libmachine: (addons-091578)     </serial>
	I0730 00:06:05.034239  503585 main.go:141] libmachine: (addons-091578)     <console type='pty'>
	I0730 00:06:05.034254  503585 main.go:141] libmachine: (addons-091578)       <target type='serial' port='0'/>
	I0730 00:06:05.034265  503585 main.go:141] libmachine: (addons-091578)     </console>
	I0730 00:06:05.034276  503585 main.go:141] libmachine: (addons-091578)     <rng model='virtio'>
	I0730 00:06:05.034288  503585 main.go:141] libmachine: (addons-091578)       <backend model='random'>/dev/random</backend>
	I0730 00:06:05.034298  503585 main.go:141] libmachine: (addons-091578)     </rng>
	I0730 00:06:05.034315  503585 main.go:141] libmachine: (addons-091578)     
	I0730 00:06:05.034329  503585 main.go:141] libmachine: (addons-091578)     
	I0730 00:06:05.034340  503585 main.go:141] libmachine: (addons-091578)   </devices>
	I0730 00:06:05.034350  503585 main.go:141] libmachine: (addons-091578) </domain>
	I0730 00:06:05.034362  503585 main.go:141] libmachine: (addons-091578) 
	I0730 00:06:05.040130  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:39:46:41 in network default
	I0730 00:06:05.040662  503585 main.go:141] libmachine: (addons-091578) Ensuring networks are active...
	I0730 00:06:05.040683  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:05.041364  503585 main.go:141] libmachine: (addons-091578) Ensuring network default is active
	I0730 00:06:05.041696  503585 main.go:141] libmachine: (addons-091578) Ensuring network mk-addons-091578 is active
	I0730 00:06:05.042243  503585 main.go:141] libmachine: (addons-091578) Getting domain xml...
	I0730 00:06:05.042987  503585 main.go:141] libmachine: (addons-091578) Creating domain...
	I0730 00:06:06.436500  503585 main.go:141] libmachine: (addons-091578) Waiting to get IP...
	I0730 00:06:06.437312  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:06.437641  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:06.437698  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:06.437644  503607 retry.go:31] will retry after 227.017258ms: waiting for machine to come up
	I0730 00:06:06.666137  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:06.666655  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:06.666681  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:06.666605  503607 retry.go:31] will retry after 301.899156ms: waiting for machine to come up
	I0730 00:06:06.970087  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:06.970598  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:06.970629  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:06.970557  503607 retry.go:31] will retry after 460.750332ms: waiting for machine to come up
	I0730 00:06:07.433374  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:07.433754  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:07.433786  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:07.433734  503607 retry.go:31] will retry after 569.719068ms: waiting for machine to come up
	I0730 00:06:08.005647  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:08.005975  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:08.006000  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:08.005936  503607 retry.go:31] will retry after 581.777372ms: waiting for machine to come up
	I0730 00:06:08.589956  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:08.590436  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:08.590467  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:08.590380  503607 retry.go:31] will retry after 585.374235ms: waiting for machine to come up
	I0730 00:06:09.177619  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:09.178031  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:09.178051  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:09.177973  503607 retry.go:31] will retry after 766.103484ms: waiting for machine to come up
	I0730 00:06:09.945937  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:09.946347  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:09.946380  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:09.946295  503607 retry.go:31] will retry after 1.332810558s: waiting for machine to come up
	I0730 00:06:11.280861  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:11.281331  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:11.281381  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:11.281251  503607 retry.go:31] will retry after 1.162526253s: waiting for machine to come up
	I0730 00:06:12.445756  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:12.446085  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:12.446107  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:12.446057  503607 retry.go:31] will retry after 1.459502082s: waiting for machine to come up
	I0730 00:06:13.907851  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:13.908304  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:13.908335  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:13.908241  503607 retry.go:31] will retry after 2.725816137s: waiting for machine to come up
	I0730 00:06:16.637526  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:16.637961  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:16.637986  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:16.637933  503607 retry.go:31] will retry after 3.042906213s: waiting for machine to come up
	I0730 00:06:19.682038  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:19.682445  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:19.682478  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:19.682388  503607 retry.go:31] will retry after 3.206453248s: waiting for machine to come up
	I0730 00:06:22.892793  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:22.893130  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:22.893157  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:22.893071  503607 retry.go:31] will retry after 5.096569464s: waiting for machine to come up
	I0730 00:06:27.990936  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:27.991396  503585 main.go:141] libmachine: (addons-091578) Found IP for machine: 192.168.39.214
	I0730 00:06:27.991424  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has current primary IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:27.991431  503585 main.go:141] libmachine: (addons-091578) Reserving static IP address...
	I0730 00:06:27.991852  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find host DHCP lease matching {name: "addons-091578", mac: "52:54:00:f9:5f:c4", ip: "192.168.39.214"} in network mk-addons-091578
	I0730 00:06:28.066468  503585 main.go:141] libmachine: (addons-091578) DBG | Getting to WaitForSSH function...
	I0730 00:06:28.066499  503585 main.go:141] libmachine: (addons-091578) Reserved static IP address: 192.168.39.214
	I0730 00:06:28.066515  503585 main.go:141] libmachine: (addons-091578) Waiting for SSH to be available...
	I0730 00:06:28.068893  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.069376  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.069407  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.069509  503585 main.go:141] libmachine: (addons-091578) DBG | Using SSH client type: external
	I0730 00:06:28.069530  503585 main.go:141] libmachine: (addons-091578) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa (-rw-------)
	I0730 00:06:28.069589  503585 main.go:141] libmachine: (addons-091578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:06:28.069617  503585 main.go:141] libmachine: (addons-091578) DBG | About to run SSH command:
	I0730 00:06:28.069633  503585 main.go:141] libmachine: (addons-091578) DBG | exit 0
	I0730 00:06:28.192859  503585 main.go:141] libmachine: (addons-091578) DBG | SSH cmd err, output: <nil>: 
	I0730 00:06:28.193135  503585 main.go:141] libmachine: (addons-091578) KVM machine creation complete!
	I0730 00:06:28.193505  503585 main.go:141] libmachine: (addons-091578) Calling .GetConfigRaw
	I0730 00:06:28.194248  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:28.194457  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:28.194643  503585 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 00:06:28.194659  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:28.195898  503585 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 00:06:28.195916  503585 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 00:06:28.195924  503585 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 00:06:28.195933  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.198027  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.198411  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.198434  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.198583  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.198768  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.198900  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.199016  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.199181  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:28.199443  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:28.199455  503585 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 00:06:28.299976  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:06:28.300002  503585 main.go:141] libmachine: Detecting the provisioner...
	I0730 00:06:28.300013  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.302905  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.303414  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.303446  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.303628  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.303843  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.304014  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.304178  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.304333  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:28.304507  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:28.304517  503585 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 00:06:28.405331  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 00:06:28.405411  503585 main.go:141] libmachine: found compatible host: buildroot
	I0730 00:06:28.405419  503585 main.go:141] libmachine: Provisioning with buildroot...
	I0730 00:06:28.405428  503585 main.go:141] libmachine: (addons-091578) Calling .GetMachineName
	I0730 00:06:28.405738  503585 buildroot.go:166] provisioning hostname "addons-091578"
	I0730 00:06:28.405776  503585 main.go:141] libmachine: (addons-091578) Calling .GetMachineName
	I0730 00:06:28.406024  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.408913  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.409647  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.410036  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.410314  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.410510  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.410671  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.410805  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.411000  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:28.411182  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:28.411196  503585 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-091578 && echo "addons-091578" | sudo tee /etc/hostname
	I0730 00:06:28.525849  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-091578
	
	I0730 00:06:28.525877  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.528740  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.529021  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.529052  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.529217  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.529428  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.529631  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.529787  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.529964  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:28.530204  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:28.530230  503585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-091578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-091578/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-091578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:06:28.636658  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:06:28.636691  503585 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:06:28.636732  503585 buildroot.go:174] setting up certificates
	I0730 00:06:28.636759  503585 provision.go:84] configureAuth start
	I0730 00:06:28.636775  503585 main.go:141] libmachine: (addons-091578) Calling .GetMachineName
	I0730 00:06:28.637096  503585 main.go:141] libmachine: (addons-091578) Calling .GetIP
	I0730 00:06:28.639919  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.640232  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.640254  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.640386  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.642677  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.643167  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.643188  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.643331  503585 provision.go:143] copyHostCerts
	I0730 00:06:28.643438  503585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:06:28.643556  503585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:06:28.643623  503585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:06:28.643671  503585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.addons-091578 san=[127.0.0.1 192.168.39.214 addons-091578 localhost minikube]
	I0730 00:06:28.865726  503585 provision.go:177] copyRemoteCerts
	I0730 00:06:28.865802  503585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:06:28.865830  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.869004  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.869295  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.869328  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.869460  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.869676  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.869842  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.869975  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:28.951040  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:06:28.975965  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 00:06:29.000305  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:06:29.024340  503585 provision.go:87] duration metric: took 387.555523ms to configureAuth
	I0730 00:06:29.024372  503585 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:06:29.024590  503585 config.go:182] Loaded profile config "addons-091578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:06:29.024724  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.027324  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.027632  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.027659  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.027776  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.028011  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.028167  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.028336  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.028502  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:29.028669  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:29.028682  503585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:06:29.275401  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:06:29.275432  503585 main.go:141] libmachine: Checking connection to Docker...
	I0730 00:06:29.275442  503585 main.go:141] libmachine: (addons-091578) Calling .GetURL
	I0730 00:06:29.276678  503585 main.go:141] libmachine: (addons-091578) DBG | Using libvirt version 6000000
	I0730 00:06:29.278812  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.279186  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.279242  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.279266  503585 main.go:141] libmachine: Docker is up and running!
	I0730 00:06:29.279285  503585 main.go:141] libmachine: Reticulating splines...
	I0730 00:06:29.279293  503585 client.go:171] duration metric: took 25.121799468s to LocalClient.Create
	I0730 00:06:29.279319  503585 start.go:167] duration metric: took 25.121864048s to libmachine.API.Create "addons-091578"
	I0730 00:06:29.279330  503585 start.go:293] postStartSetup for "addons-091578" (driver="kvm2")
	I0730 00:06:29.279340  503585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:06:29.279358  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.279620  503585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:06:29.279645  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.281915  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.282188  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.282217  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.282435  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.282716  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.282933  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.283068  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:29.362884  503585 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:06:29.366734  503585 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:06:29.366766  503585 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:06:29.366864  503585 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:06:29.366898  503585 start.go:296] duration metric: took 87.56036ms for postStartSetup
	I0730 00:06:29.366956  503585 main.go:141] libmachine: (addons-091578) Calling .GetConfigRaw
	I0730 00:06:29.367636  503585 main.go:141] libmachine: (addons-091578) Calling .GetIP
	I0730 00:06:29.370387  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.370725  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.370757  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.370973  503585 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/config.json ...
	I0730 00:06:29.371166  503585 start.go:128] duration metric: took 25.232469033s to createHost
	I0730 00:06:29.371193  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.373627  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.373955  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.373976  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.374128  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.374322  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.374509  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.374642  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.374836  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:29.375017  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:29.375029  503585 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 00:06:29.481225  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722297989.460630817
	
	I0730 00:06:29.481255  503585 fix.go:216] guest clock: 1722297989.460630817
	I0730 00:06:29.481267  503585 fix.go:229] Guest: 2024-07-30 00:06:29.460630817 +0000 UTC Remote: 2024-07-30 00:06:29.371178586 +0000 UTC m=+25.339019431 (delta=89.452231ms)
	I0730 00:06:29.481300  503585 fix.go:200] guest clock delta is within tolerance: 89.452231ms
	I0730 00:06:29.481306  503585 start.go:83] releasing machines lock for "addons-091578", held for 25.342740042s
	I0730 00:06:29.481331  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.481668  503585 main.go:141] libmachine: (addons-091578) Calling .GetIP
	I0730 00:06:29.484292  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.484691  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.484739  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.484816  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.485351  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.485544  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.485647  503585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:06:29.485696  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.485804  503585 ssh_runner.go:195] Run: cat /version.json
	I0730 00:06:29.485821  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.488282  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.488476  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.488607  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.488635  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.488826  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.488840  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.488848  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.489010  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.489111  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.489207  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.489273  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.489353  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.489416  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:29.489465  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:29.561348  503585 ssh_runner.go:195] Run: systemctl --version
	I0730 00:06:29.595588  503585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:06:29.752489  503585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:06:29.758186  503585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:06:29.758272  503585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:06:29.774299  503585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 00:06:29.774331  503585 start.go:495] detecting cgroup driver to use...
	I0730 00:06:29.774408  503585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:06:29.790689  503585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:06:29.804473  503585 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:06:29.804541  503585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:06:29.817576  503585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:06:29.831051  503585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:06:29.938437  503585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:06:30.089799  503585 docker.go:233] disabling docker service ...
	I0730 00:06:30.089890  503585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:06:30.103656  503585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:06:30.115831  503585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:06:30.237644  503585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:06:30.354697  503585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:06:30.367512  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:06:30.384785  503585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:06:30.384847  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.394460  503585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:06:30.394528  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.404052  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.413692  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.423316  503585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:06:30.433261  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.443231  503585 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.459823  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.469807  503585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:06:30.478807  503585 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 00:06:30.478880  503585 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 00:06:30.492287  503585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:06:30.501783  503585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:06:30.616317  503585 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 00:06:30.742290  503585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:06:30.742385  503585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:06:30.746811  503585 start.go:563] Will wait 60s for crictl version
	I0730 00:06:30.746886  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:06:30.750374  503585 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:06:30.787626  503585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:06:30.787762  503585 ssh_runner.go:195] Run: crio --version
	I0730 00:06:30.813701  503585 ssh_runner.go:195] Run: crio --version
	I0730 00:06:30.841999  503585 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:06:30.843422  503585 main.go:141] libmachine: (addons-091578) Calling .GetIP
	I0730 00:06:30.846100  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:30.846448  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:30.846478  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:30.846673  503585 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:06:30.850909  503585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:06:30.862451  503585 kubeadm.go:883] updating cluster {Name:addons-091578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 00:06:30.862593  503585 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:06:30.862657  503585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:06:30.891616  503585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0730 00:06:30.891689  503585 ssh_runner.go:195] Run: which lz4
	I0730 00:06:30.895286  503585 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0730 00:06:30.899173  503585 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0730 00:06:30.899206  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0730 00:06:32.017133  503585 crio.go:462] duration metric: took 1.121886601s to copy over tarball
	I0730 00:06:32.017222  503585 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0730 00:06:34.221238  503585 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.203977277s)
	I0730 00:06:34.221273  503585 crio.go:469] duration metric: took 2.20410772s to extract the tarball
	I0730 00:06:34.221285  503585 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0730 00:06:34.258279  503585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:06:34.298516  503585 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:06:34.298543  503585 cache_images.go:84] Images are preloaded, skipping loading
	I0730 00:06:34.298552  503585 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.30.3 crio true true} ...
	I0730 00:06:34.298694  503585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-091578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:06:34.298763  503585 ssh_runner.go:195] Run: crio config
	I0730 00:06:34.341041  503585 cni.go:84] Creating CNI manager for ""
	I0730 00:06:34.341069  503585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:06:34.341087  503585 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 00:06:34.341117  503585 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-091578 NodeName:addons-091578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 00:06:34.341290  503585 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-091578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 00:06:34.341369  503585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:06:34.350448  503585 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 00:06:34.350531  503585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 00:06:34.359125  503585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0730 00:06:34.375453  503585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:06:34.391743  503585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0730 00:06:34.408633  503585 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0730 00:06:34.412323  503585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:06:34.425133  503585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:06:34.543503  503585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:06:34.559696  503585 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578 for IP: 192.168.39.214
	I0730 00:06:34.559729  503585 certs.go:194] generating shared ca certs ...
	I0730 00:06:34.559753  503585 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.559942  503585 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:06:34.777287  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt ...
	I0730 00:06:34.777321  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt: {Name:mkb7ea0bad21ae509edda96159e2c7ea1e30c6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.777534  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key ...
	I0730 00:06:34.777553  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key: {Name:mk4e96af191191f480b46c042f1e27b6aeadd365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.777667  503585 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:06:34.996212  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt ...
	I0730 00:06:34.996245  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt: {Name:mk139a030973db209f8ffe3406c971813e95e901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.996422  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key ...
	I0730 00:06:34.996434  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key: {Name:mkc555616fa7470fab21853628568988b93ea51a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.996504  503585 certs.go:256] generating profile certs ...
	I0730 00:06:34.996568  503585 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.key
	I0730 00:06:34.996582  503585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt with IP's: []
	I0730 00:06:35.240339  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt ...
	I0730 00:06:35.240373  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: {Name:mk9185df29d5fb509b2c24a719fe223587ce7578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.240551  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.key ...
	I0730 00:06:35.240562  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.key: {Name:mk44e6c060866c5d708c17c60140d362e29beee9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.240633  503585 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key.37bc2271
	I0730 00:06:35.240650  503585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt.37bc2271 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I0730 00:06:35.485444  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt.37bc2271 ...
	I0730 00:06:35.485478  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt.37bc2271: {Name:mk2e174214ad821c70c65f7506c7e1bcfa80282d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.485667  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key.37bc2271 ...
	I0730 00:06:35.485690  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key.37bc2271: {Name:mk8edb370e6bc7cb67eb48b97217b15577bb8eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.485795  503585 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt.37bc2271 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt
	I0730 00:06:35.485902  503585 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key.37bc2271 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key
	I0730 00:06:35.485969  503585 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.key
	I0730 00:06:35.485995  503585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.crt with IP's: []
	I0730 00:06:35.626274  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.crt ...
	I0730 00:06:35.626305  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.crt: {Name:mk36be54f25383cab0071dd0bffb7bb3c83d494d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.626499  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.key ...
	I0730 00:06:35.626522  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.key: {Name:mkeaff3af3d6f1e9defee6cc86036e50dd4f2e6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.626737  503585 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:06:35.626782  503585 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:06:35.626822  503585 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:06:35.626851  503585 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:06:35.627517  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:06:35.651310  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:06:35.673486  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:06:35.702911  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:06:35.725540  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0730 00:06:35.748100  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 00:06:35.770433  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:06:35.797234  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:06:35.819464  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:06:35.842174  503585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 00:06:35.857908  503585 ssh_runner.go:195] Run: openssl version
	I0730 00:06:35.863488  503585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:06:35.873865  503585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:06:35.878111  503585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:06:35.878172  503585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:06:35.883866  503585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:06:35.894221  503585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:06:35.898198  503585 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 00:06:35.898275  503585 kubeadm.go:392] StartCluster: {Name:addons-091578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:06:35.898354  503585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 00:06:35.898400  503585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 00:06:35.933302  503585 cri.go:89] found id: ""
	I0730 00:06:35.933385  503585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 00:06:35.942952  503585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0730 00:06:35.952117  503585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 00:06:35.961020  503585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 00:06:35.961066  503585 kubeadm.go:157] found existing configuration files:
	
	I0730 00:06:35.961115  503585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 00:06:35.970201  503585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 00:06:35.970266  503585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 00:06:35.979169  503585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 00:06:35.987909  503585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 00:06:35.987976  503585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 00:06:35.997111  503585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 00:06:36.006365  503585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 00:06:36.006436  503585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 00:06:36.015400  503585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 00:06:36.024005  503585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 00:06:36.024067  503585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0730 00:06:36.033057  503585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0730 00:06:36.087367  503585 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0730 00:06:36.087449  503585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0730 00:06:36.211093  503585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0730 00:06:36.211247  503585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0730 00:06:36.211394  503585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0730 00:06:36.426383  503585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0730 00:06:36.517863  503585 out.go:204]   - Generating certificates and keys ...
	I0730 00:06:36.517997  503585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0730 00:06:36.518109  503585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0730 00:06:36.588207  503585 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0730 00:06:36.674406  503585 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0730 00:06:36.733368  503585 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0730 00:06:36.824132  503585 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0730 00:06:37.052552  503585 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0730 00:06:37.052771  503585 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-091578 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I0730 00:06:37.339834  503585 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0730 00:06:37.340047  503585 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-091578 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I0730 00:06:37.474138  503585 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0730 00:06:37.566852  503585 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0730 00:06:37.689891  503585 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0730 00:06:37.690126  503585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0730 00:06:37.912398  503585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0730 00:06:38.088617  503585 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0730 00:06:38.149969  503585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0730 00:06:38.368471  503585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0730 00:06:38.532698  503585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0730 00:06:38.533473  503585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0730 00:06:38.537497  503585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0730 00:06:38.596568  503585 out.go:204]   - Booting up control plane ...
	I0730 00:06:38.596738  503585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0730 00:06:38.596861  503585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0730 00:06:38.596975  503585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0730 00:06:38.597134  503585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0730 00:06:38.597251  503585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0730 00:06:38.597320  503585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0730 00:06:38.675186  503585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0730 00:06:38.675286  503585 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0730 00:06:39.176987  503585 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.110655ms
	I0730 00:06:39.177126  503585 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0730 00:06:43.676985  503585 kubeadm.go:310] [api-check] The API server is healthy after 4.5018289s
	I0730 00:06:43.694821  503585 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0730 00:06:43.706043  503585 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0730 00:06:43.729901  503585 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0730 00:06:43.730195  503585 kubeadm.go:310] [mark-control-plane] Marking the node addons-091578 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0730 00:06:43.743000  503585 kubeadm.go:310] [bootstrap-token] Using token: 4lszgu.k109gvlsncythwao
	I0730 00:06:43.744466  503585 out.go:204]   - Configuring RBAC rules ...
	I0730 00:06:43.744617  503585 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0730 00:06:43.751633  503585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0730 00:06:43.758261  503585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0730 00:06:43.761401  503585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0730 00:06:43.765026  503585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0730 00:06:43.768427  503585 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0730 00:06:44.085319  503585 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0730 00:06:44.508125  503585 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0730 00:06:45.085194  503585 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0730 00:06:45.086673  503585 kubeadm.go:310] 
	I0730 00:06:45.086755  503585 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0730 00:06:45.086773  503585 kubeadm.go:310] 
	I0730 00:06:45.086848  503585 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0730 00:06:45.086857  503585 kubeadm.go:310] 
	I0730 00:06:45.086913  503585 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0730 00:06:45.087012  503585 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0730 00:06:45.087088  503585 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0730 00:06:45.087099  503585 kubeadm.go:310] 
	I0730 00:06:45.087174  503585 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0730 00:06:45.087186  503585 kubeadm.go:310] 
	I0730 00:06:45.087245  503585 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0730 00:06:45.087259  503585 kubeadm.go:310] 
	I0730 00:06:45.087328  503585 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0730 00:06:45.087430  503585 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0730 00:06:45.087539  503585 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0730 00:06:45.087549  503585 kubeadm.go:310] 
	I0730 00:06:45.087672  503585 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0730 00:06:45.087760  503585 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0730 00:06:45.087772  503585 kubeadm.go:310] 
	I0730 00:06:45.087869  503585 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4lszgu.k109gvlsncythwao \
	I0730 00:06:45.087953  503585 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 \
	I0730 00:06:45.087972  503585 kubeadm.go:310] 	--control-plane 
	I0730 00:06:45.087979  503585 kubeadm.go:310] 
	I0730 00:06:45.088051  503585 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0730 00:06:45.088058  503585 kubeadm.go:310] 
	I0730 00:06:45.088130  503585 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4lszgu.k109gvlsncythwao \
	I0730 00:06:45.088209  503585 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 
	I0730 00:06:45.089092  503585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0730 00:06:45.089165  503585 cni.go:84] Creating CNI manager for ""
	I0730 00:06:45.089182  503585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:06:45.091035  503585 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0730 00:06:45.092349  503585 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0730 00:06:45.102258  503585 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0730 00:06:45.119378  503585 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0730 00:06:45.119449  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:45.119513  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-091578 minikube.k8s.io/updated_at=2024_07_30T00_06_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=addons-091578 minikube.k8s.io/primary=true
	I0730 00:06:45.142329  503585 ops.go:34] apiserver oom_adj: -16
	I0730 00:06:45.246478  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:45.747518  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:46.247455  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:46.747523  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:47.247299  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:47.746862  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:48.246905  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:48.746983  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:49.246838  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:49.747341  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:50.247276  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:50.746745  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:51.246810  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:51.747325  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:52.246946  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:52.747198  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:53.246668  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:53.746601  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:54.247274  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:54.747485  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:55.247492  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:55.747313  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:56.247138  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:56.746514  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:57.247169  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:57.746516  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:58.246570  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:58.747086  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:59.247212  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:59.339609  503585 kubeadm.go:1113] duration metric: took 14.220221794s to wait for elevateKubeSystemPrivileges
	I0730 00:06:59.339661  503585 kubeadm.go:394] duration metric: took 23.441392171s to StartCluster
	I0730 00:06:59.339693  503585 settings.go:142] acquiring lock: {Name:mk89b2537c1ec20302d90ab73b167422bb363b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:59.339860  503585 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:06:59.340499  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/kubeconfig: {Name:mk6ecf4e5b07b810f1fa2b9790857d7586f0cf41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:59.340753  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0730 00:06:59.340790  503585 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:06:59.340875  503585 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0730 00:06:59.340975  503585 addons.go:69] Setting yakd=true in profile "addons-091578"
	I0730 00:06:59.340996  503585 addons.go:69] Setting default-storageclass=true in profile "addons-091578"
	I0730 00:06:59.341015  503585 addons.go:69] Setting helm-tiller=true in profile "addons-091578"
	I0730 00:06:59.341009  503585 addons.go:69] Setting cloud-spanner=true in profile "addons-091578"
	I0730 00:06:59.341029  503585 addons.go:69] Setting storage-provisioner=true in profile "addons-091578"
	I0730 00:06:59.341038  503585 addons.go:69] Setting volcano=true in profile "addons-091578"
	I0730 00:06:59.341040  503585 addons.go:234] Setting addon helm-tiller=true in "addons-091578"
	I0730 00:06:59.341046  503585 config.go:182] Loaded profile config "addons-091578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:06:59.341057  503585 addons.go:234] Setting addon volcano=true in "addons-091578"
	I0730 00:06:59.341059  503585 addons.go:69] Setting inspektor-gadget=true in profile "addons-091578"
	I0730 00:06:59.341063  503585 addons.go:69] Setting ingress=true in profile "addons-091578"
	I0730 00:06:59.341076  503585 addons.go:234] Setting addon cloud-spanner=true in "addons-091578"
	I0730 00:06:59.341083  503585 addons.go:234] Setting addon ingress=true in "addons-091578"
	I0730 00:06:59.341095  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341096  503585 addons.go:69] Setting volumesnapshots=true in profile "addons-091578"
	I0730 00:06:59.341101  503585 addons.go:69] Setting metrics-server=true in profile "addons-091578"
	I0730 00:06:59.341009  503585 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-091578"
	I0730 00:06:59.341118  503585 addons.go:234] Setting addon metrics-server=true in "addons-091578"
	I0730 00:06:59.341120  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341136  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341140  503585 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-091578"
	I0730 00:06:59.341172  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341095  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341021  503585 addons.go:69] Setting registry=true in profile "addons-091578"
	I0730 00:06:59.341601  503585 addons.go:234] Setting addon registry=true in "addons-091578"
	I0730 00:06:59.341664  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341030  503585 addons.go:234] Setting addon yakd=true in "addons-091578"
	I0730 00:06:59.341077  503585 addons.go:234] Setting addon inspektor-gadget=true in "addons-091578"
	I0730 00:06:59.341943  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.342140  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342176  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.342206  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342227  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342245  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.342248  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342273  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.342281  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.342356  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342380  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.340985  503585 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-091578"
	I0730 00:06:59.342782  503585 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-091578"
	I0730 00:06:59.342814  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.342821  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342854  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.343214  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.343242  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.343373  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.343424  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.343878  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.344462  503585 out.go:177] * Verifying Kubernetes components...
	I0730 00:06:59.344424  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.341012  503585 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-091578"
	I0730 00:06:59.340979  503585 addons.go:69] Setting gcp-auth=true in profile "addons-091578"
	I0730 00:06:59.341051  503585 addons.go:234] Setting addon storage-provisioner=true in "addons-091578"
	I0730 00:06:59.341048  503585 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-091578"
	I0730 00:06:59.341059  503585 addons.go:69] Setting ingress-dns=true in profile "addons-091578"
	I0730 00:06:59.341110  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341115  503585 addons.go:234] Setting addon volumesnapshots=true in "addons-091578"
	I0730 00:06:59.344801  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.345153  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.345244  503585 addons.go:234] Setting addon ingress-dns=true in "addons-091578"
	I0730 00:06:59.345355  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.345411  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.345461  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.345604  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.345658  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.346159  503585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:06:59.346652  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.346683  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.346716  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.346776  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.346836  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.346881  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.349702  503585 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-091578"
	I0730 00:06:59.349810  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.350244  503585 mustload.go:65] Loading cluster: addons-091578
	I0730 00:06:59.364093  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0730 00:06:59.364283  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0730 00:06:59.364761  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.365038  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.365322  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.365347  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.365431  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0730 00:06:59.365726  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.365898  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.366450  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.366514  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.366873  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.366894  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.366913  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I0730 00:06:59.367358  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.367432  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.367447  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.367454  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.367962  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.368156  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.368208  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.368486  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.368508  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.369060  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.369100  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.369590  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.370350  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.370395  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.380518  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38353
	I0730 00:06:59.381170  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.382363  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37195
	I0730 00:06:59.382970  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.383925  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.383950  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.384057  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0730 00:06:59.384394  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.384479  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.385572  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.385592  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.385933  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.385975  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.386650  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.386699  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.387125  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.387166  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.388109  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.388152  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.389207  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.389266  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.389450  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I0730 00:06:59.397007  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0730 00:06:59.397105  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0730 00:06:59.397300  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0730 00:06:59.397475  503585 config.go:182] Loaded profile config "addons-091578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:06:59.397990  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.398137  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.398158  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.398172  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.398240  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.398267  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.399031  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.399160  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.399364  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.399378  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.399586  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.399600  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.399677  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39837
	I0730 00:06:59.399961  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.399975  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.400076  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.400170  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.400825  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.401095  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.401130  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.401204  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.401890  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.401942  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.402177  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.402553  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.404577  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.406813  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0730 00:06:59.407265  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.408221  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.408245  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.408635  503585 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0730 00:06:59.408682  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.408736  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.408958  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.409030  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.409273  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.409802  503585 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0730 00:06:59.409823  503585 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0730 00:06:59.409847  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.413396  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.413415  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.415053  503585 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0730 00:06:59.415222  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.415316  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.415333  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.415594  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.415797  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.416024  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.416295  503585 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0730 00:06:59.416313  503585 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0730 00:06:59.416334  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.417013  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0730 00:06:59.418422  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0730 00:06:59.419440  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.419500  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0730 00:06:59.419752  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.419774  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.420050  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.420250  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.420408  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.420564  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.422203  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0730 00:06:59.423510  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0730 00:06:59.424927  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0730 00:06:59.425921  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0730 00:06:59.426034  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0730 00:06:59.426370  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.426980  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.427000  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.427197  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0730 00:06:59.427215  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0730 00:06:59.427237  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.427341  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.427515  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.430777  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.430793  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.431066  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0730 00:06:59.431264  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.431288  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.431407  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.431578  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.431757  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.431923  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.432420  503585 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0730 00:06:59.433121  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.433593  503585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0730 00:06:59.433613  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0730 00:06:59.433632  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.433712  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.433731  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.436787  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.437233  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.437261  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.437505  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.437706  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.437899  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.437924  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.438099  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.438104  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.439779  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.440058  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:06:59.440072  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:06:59.440312  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:06:59.440342  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:06:59.440357  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:06:59.440366  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:06:59.440374  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:06:59.440544  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:06:59.440559  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:06:59.440568  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	W0730 00:06:59.440675  503585 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0730 00:06:59.441171  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0730 00:06:59.442523  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.443227  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.443250  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.443621  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I0730 00:06:59.443977  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0730 00:06:59.444293  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.444790  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.444803  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.444858  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.445165  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.445741  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.445761  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.446490  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.447006  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.447032  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.447531  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.447766  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.449644  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.450967  503585 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-091578"
	I0730 00:06:59.451013  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.451389  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.451440  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.451689  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.453607  503585 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0730 00:06:59.454250  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0730 00:06:59.454258  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0730 00:06:59.454710  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.455075  503585 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0730 00:06:59.455095  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0730 00:06:59.455115  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.455219  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.455238  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.455398  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0730 00:06:59.455625  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.455634  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I0730 00:06:59.455776  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.455951  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.456050  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.456362  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.456367  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.456379  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.456384  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.456507  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.456518  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.456641  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.456657  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.456851  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.456884  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.457081  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.457152  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.457625  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.457691  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.458809  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.459200  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.459239  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.459346  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0730 00:06:59.459489  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.459760  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.460220  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.460247  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.460550  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.460847  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.460898  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.461218  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.461404  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.461444  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.461578  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.461766  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.461930  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.462095  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.463056  503585 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0730 00:06:59.463397  503585 addons.go:234] Setting addon default-storageclass=true in "addons-091578"
	I0730 00:06:59.463440  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.463762  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.463793  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.464051  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0730 00:06:59.464464  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.465004  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.465023  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.465365  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0730 00:06:59.465532  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.465926  503585 out.go:177]   - Using image docker.io/registry:2.8.3
	I0730 00:06:59.466073  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.466109  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.466708  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.467302  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.467319  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.467342  503585 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0730 00:06:59.467358  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0730 00:06:59.467377  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.469355  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.469766  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.471361  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.471971  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.472009  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.472222  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.472407  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.472578  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.472754  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.477796  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.479887  503585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0730 00:06:59.479934  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34277
	I0730 00:06:59.480394  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.480547  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0730 00:06:59.481142  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.481161  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.481536  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.481606  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.482191  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.482236  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.482468  503585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 00:06:59.482912  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.482930  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.483521  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.483980  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.484468  503585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 00:06:59.485531  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0730 00:06:59.485729  503585 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0730 00:06:59.485750  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0730 00:06:59.485772  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.485732  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0730 00:06:59.486008  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.486235  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.486598  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.486618  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.486725  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.486747  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.486927  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.487039  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.487111  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.487385  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.487450  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.489366  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.489710  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0730 00:06:59.490148  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.490670  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.490709  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.490676  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0730 00:06:59.490908  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0730 00:06:59.490927  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.490934  503585 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0730 00:06:59.490953  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.490978  503585 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0730 00:06:59.491091  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.491190  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.491326  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.491482  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.491496  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.491636  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.492093  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0730 00:06:59.492107  503585 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0730 00:06:59.492113  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.492122  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.492292  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.494039  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.495580  503585 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0730 00:06:59.495951  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.496070  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.496431  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.496527  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.496790  503585 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0730 00:06:59.496805  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0730 00:06:59.496820  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.496836  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.496855  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.496887  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.497119  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.497120  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.497551  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.497945  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.498192  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.498244  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.498395  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.499376  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.499710  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.499730  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.499870  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.500047  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.500210  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.500388  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.502691  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I0730 00:06:59.503123  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.503568  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.503592  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.503958  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.504161  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.505946  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.507983  503585 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0730 00:06:59.509242  503585 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0730 00:06:59.509263  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0730 00:06:59.509286  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.510661  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42627
	I0730 00:06:59.511264  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.511889  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.511908  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.512442  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.513087  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.513130  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.513387  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.517381  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.517413  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.517524  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.517704  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.517852  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.518000  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	W0730 00:06:59.521243  503585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56976->192.168.39.214:22: read: connection reset by peer
	I0730 00:06:59.521275  503585 retry.go:31] will retry after 192.776047ms: ssh: handshake failed: read tcp 192.168.39.1:56976->192.168.39.214:22: read: connection reset by peer
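
The two lines above show how transient SSH failures are absorbed during addon setup: a dial that dies with "connection reset by peer" is not fatal, it is simply retried after a short delay (retry.go:31). The Go sketch below illustrates that retry-with-backoff pattern in isolation; it is a hypothetical stand-in, not minikube's actual retry API, and the names dialSSH and maxAttempts are invented for the example.

package main

import (
	"errors"
	"fmt"
	"time"
)

// dialSSH stands in for any operation that can fail transiently,
// such as the SSH handshake seen in the log above. It succeeds on
// the third attempt to make the retry loop observable.
func dialSSH(attempt int) error {
	if attempt < 3 {
		return errors.New("ssh: handshake failed: connection reset by peer")
	}
	return nil
}

func main() {
	const maxAttempts = 5
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := dialSSH(attempt)
		if err == nil {
			fmt.Println("ssh client established")
			return
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off before the next attempt
	}
	fmt.Println("giving up after", maxAttempts, "attempts")
}
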
	I0730 00:06:59.521913  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I0730 00:06:59.522380  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.522585  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0730 00:06:59.522850  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.522877  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.522940  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.523257  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.523446  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.523887  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.523905  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.524523  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.524810  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.525589  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.527889  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.528191  503585 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 00:06:59.529848  503585 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 00:06:59.529867  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0730 00:06:59.529885  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.529961  503585 out.go:177]   - Using image docker.io/busybox:stable
	I0730 00:06:59.531096  503585 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0730 00:06:59.532555  503585 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0730 00:06:59.532573  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0730 00:06:59.532592  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.533273  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.533746  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.533769  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.533933  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.534172  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.534315  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.534472  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.535688  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	W0730 00:06:59.535873  503585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0730 00:06:59.535894  503585 retry.go:31] will retry after 225.32749ms: ssh: handshake failed: EOF
	I0730 00:06:59.536093  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.536121  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.536305  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.536468  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.536570  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.536638  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0730 00:06:59.536820  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.537093  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.537634  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.537653  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.537968  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.538165  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	W0730 00:06:59.539314  503585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56992->192.168.39.214:22: read: connection reset by peer
	I0730 00:06:59.539350  503585 retry.go:31] will retry after 343.324768ms: ssh: handshake failed: read tcp 192.168.39.1:56992->192.168.39.214:22: read: connection reset by peer
	I0730 00:06:59.539582  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.539814  503585 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0730 00:06:59.539828  503585 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0730 00:06:59.539846  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.542830  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.543224  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.543245  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.543383  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.543543  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.543670  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.543797  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.843337  503585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0730 00:06:59.843365  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0730 00:06:59.913024  503585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0730 00:06:59.913062  503585 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0730 00:06:59.928765  503585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0730 00:06:59.928806  503585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0730 00:06:59.932177  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0730 00:06:59.970105  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0730 00:06:59.972228  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0730 00:06:59.974920  503585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0730 00:06:59.974944  503585 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0730 00:06:59.983115  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0730 00:06:59.983137  503585 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0730 00:06:59.991932  503585 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0730 00:06:59.991960  503585 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0730 00:07:00.027140  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0730 00:07:00.042042  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0730 00:07:00.042076  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0730 00:07:00.062815  503585 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0730 00:07:00.062847  503585 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0730 00:07:00.080521  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0730 00:07:00.102436  503585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0730 00:07:00.102469  503585 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0730 00:07:00.105326  503585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:07:00.105526  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0730 00:07:00.138511  503585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0730 00:07:00.138535  503585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0730 00:07:00.150300  503585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0730 00:07:00.150326  503585 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0730 00:07:00.169210  503585 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0730 00:07:00.169300  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0730 00:07:00.177603  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0730 00:07:00.177628  503585 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0730 00:07:00.191086  503585 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0730 00:07:00.191116  503585 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0730 00:07:00.207571  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0730 00:07:00.207606  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0730 00:07:00.323650  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0730 00:07:00.327681  503585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0730 00:07:00.327712  503585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0730 00:07:00.344053  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 00:07:00.344500  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0730 00:07:00.392580  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0730 00:07:00.411691  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0730 00:07:00.411725  503585 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0730 00:07:00.413533  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0730 00:07:00.413557  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0730 00:07:00.439634  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0730 00:07:00.469406  503585 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0730 00:07:00.469435  503585 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0730 00:07:00.486667  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0730 00:07:00.486689  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0730 00:07:00.576154  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0730 00:07:00.576191  503585 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0730 00:07:00.645426  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0730 00:07:00.645468  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0730 00:07:00.722682  503585 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0730 00:07:00.722722  503585 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0730 00:07:00.728598  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0730 00:07:00.827435  503585 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 00:07:00.827461  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0730 00:07:00.925005  503585 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0730 00:07:00.925033  503585 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0730 00:07:00.925565  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0730 00:07:00.925599  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0730 00:07:01.063902  503585 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0730 00:07:01.063943  503585 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0730 00:07:01.066096  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 00:07:01.121144  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0730 00:07:01.121171  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0730 00:07:01.295864  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.363636237s)
	I0730 00:07:01.295936  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:01.295951  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:01.296310  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:01.296330  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:01.296330  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:01.296345  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:01.296355  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:01.296635  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:01.296651  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:01.305854  503585 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0730 00:07:01.305882  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0730 00:07:01.488469  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0730 00:07:01.488509  503585 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0730 00:07:01.627165  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0730 00:07:01.754381  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0730 00:07:01.754424  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0730 00:07:02.031152  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0730 00:07:02.031185  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0730 00:07:02.392908  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0730 00:07:02.392943  503585 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0730 00:07:02.746106  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0730 00:07:06.501255  503585 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0730 00:07:06.501313  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:07:06.504790  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:07:06.505292  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:07:06.505323  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:07:06.505625  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:07:06.505855  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:07:06.506074  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:07:06.506256  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:07:06.829073  503585 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0730 00:07:06.888096  503585 addons.go:234] Setting addon gcp-auth=true in "addons-091578"
	I0730 00:07:06.888162  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:07:06.888480  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:07:06.888517  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:07:06.904936  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36419
	I0730 00:07:06.905413  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:07:06.905993  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:07:06.906020  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:07:06.906407  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:07:06.906993  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:07:06.907039  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:07:06.924001  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38333
	I0730 00:07:06.924474  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:07:06.925035  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:07:06.925063  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:07:06.925482  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:07:06.925739  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:07:06.927706  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:07:06.928001  503585 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0730 00:07:06.928027  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:07:06.932180  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:07:06.932883  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:07:06.932916  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:07:06.933083  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:07:06.933361  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:07:06.933565  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:07:06.933747  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:07:07.836894  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.866740722s)
	I0730 00:07:07.836939  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.864673997s)
	I0730 00:07:07.836952  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.836966  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.836987  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837013  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837039  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.80986944s)
	I0730 00:07:07.837081  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837088  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.756538737s)
	I0730 00:07:07.837098  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837110  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837106  503585 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.731751326s)
	I0730 00:07:07.837148  503585 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.731604591s)
	I0730 00:07:07.837166  503585 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0730 00:07:07.837210  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.513529974s)
	I0730 00:07:07.837227  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837236  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837329  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.493234037s)
	I0730 00:07:07.837346  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837354  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837409  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.492876354s)
	I0730 00:07:07.837442  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837455  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837523  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.444908018s)
	I0730 00:07:07.837539  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837547  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837810  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.398148276s)
	I0730 00:07:07.837834  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837843  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.838117  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.109488528s)
	I0730 00:07:07.838140  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.838149  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.838167  503585 node_ready.go:35] waiting up to 6m0s for node "addons-091578" to be "Ready" ...
	I0730 00:07:07.837121  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.838275  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.772144948s)
	W0730 00:07:07.838304  503585 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0730 00:07:07.838321  503585 retry.go:31] will retry after 333.750071ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
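
The failure above is the usual CRD ordering race: the VolumeSnapshotClass CRD and a VolumeSnapshotClass object are sent in the same kubectl apply, so the object can be rejected with "no matches for kind" before the new API is registered, and the whole apply is retried. One way to sidestep the race is sketched below: create the CRD first, block on its Established condition, then apply the class. The file paths and CRD name are taken from the log; the helper itself is hypothetical and is not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// kubectl runs a kubectl command and streams its combined output.
func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// 1. Create the snapshot CRDs on their own first.
	if err := kubectl("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// 2. Wait until the API server reports the CRD as Established,
	//    i.e. the VolumeSnapshotClass kind is actually servable.
	if err := kubectl("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// 3. Only now apply the VolumeSnapshotClass object itself.
	if err := kubectl("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
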
	I0730 00:07:07.838377  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.211175238s)
	I0730 00:07:07.838403  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.838414  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840133  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840141  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840147  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840153  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840157  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840165  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840167  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840168  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840173  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840176  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840177  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840182  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840187  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840190  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840195  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840232  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840253  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840259  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840267  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840274  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840317  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840326  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840340  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840347  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840355  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840362  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840371  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840380  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840385  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840402  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840410  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840417  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840424  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840443  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840468  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840476  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840484  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840490  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840551  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840563  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840570  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840578  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840587  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840637  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840644  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840655  503585 addons.go:475] Verifying addon metrics-server=true in "addons-091578"
	I0730 00:07:07.840349  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840677  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840734  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840756  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840763  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840799  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840831  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840838  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840994  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.841019  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.841031  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.841039  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.841048  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.841426  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.841451  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.841477  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.841483  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.841908  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.841919  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842267  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.842292  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.842299  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842403  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.842437  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.842444  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842577  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.842588  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842790  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.842793  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.842804  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842813  503585 addons.go:475] Verifying addon registry=true in "addons-091578"
	I0730 00:07:07.843070  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.843099  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.843105  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.843113  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.843120  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.843345  503585 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-091578 service yakd-dashboard -n yakd-dashboard
	
	I0730 00:07:07.844251  503585 out.go:177] * Verifying registry addon...
	I0730 00:07:07.846486  503585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0730 00:07:07.847042  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.847046  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.847056  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.847046  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.847070  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.847080  503585 addons.go:475] Verifying addon ingress=true in "addons-091578"
	I0730 00:07:07.847061  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.848511  503585 out.go:177] * Verifying ingress addon...
	I0730 00:07:07.850395  503585 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0730 00:07:07.852120  503585 node_ready.go:49] node "addons-091578" has status "Ready":"True"
	I0730 00:07:07.852141  503585 node_ready.go:38] duration metric: took 13.956338ms for node "addons-091578" to be "Ready" ...
	I0730 00:07:07.852152  503585 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:07:07.862259  503585 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0730 00:07:07.862279  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:07.862519  503585 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0730 00:07:07.862542  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:07.891923  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.891954  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.892386  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.892444  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.892453  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.893067  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.893088  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.893352  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.893374  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	W0730 00:07:07.893483  503585 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0730 00:07:07.896029  503585 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fxsmn" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.924855  503585 pod_ready.go:92] pod "coredns-7db6d8ff4d-fxsmn" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:07.924882  503585 pod_ready.go:81] duration metric: took 28.821665ms for pod "coredns-7db6d8ff4d-fxsmn" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.924893  503585 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lznwz" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.968336  503585 pod_ready.go:92] pod "coredns-7db6d8ff4d-lznwz" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:07.968375  503585 pod_ready.go:81] duration metric: took 43.473374ms for pod "coredns-7db6d8ff4d-lznwz" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.968392  503585 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.995657  503585 pod_ready.go:92] pod "etcd-addons-091578" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:07.995689  503585 pod_ready.go:81] duration metric: took 27.288893ms for pod "etcd-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.995700  503585 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.011704  503585 pod_ready.go:92] pod "kube-apiserver-addons-091578" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:08.011739  503585 pod_ready.go:81] duration metric: took 16.031029ms for pod "kube-apiserver-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.011754  503585 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.173065  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 00:07:08.244306  503585 pod_ready.go:92] pod "kube-controller-manager-addons-091578" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:08.244348  503585 pod_ready.go:81] duration metric: took 232.584167ms for pod "kube-controller-manager-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.244364  503585 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4j5tl" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.343638  503585 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-091578" context rescaled to 1 replicas
	I0730 00:07:08.370288  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:08.373607  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:08.647385  503585 pod_ready.go:92] pod "kube-proxy-4j5tl" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:08.647412  503585 pod_ready.go:81] duration metric: took 403.039444ms for pod "kube-proxy-4j5tl" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.647422  503585 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.832682  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.086499617s)
	I0730 00:07:08.832771  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:08.832778  503585 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.904747095s)
	I0730 00:07:08.832796  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:08.833308  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:08.833345  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:08.833364  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:08.833378  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:08.833389  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:08.833676  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:08.833693  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:08.833706  503585 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-091578"
	I0730 00:07:08.833737  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:08.834514  503585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 00:07:08.835318  503585 out.go:177] * Verifying csi-hostpath-driver addon...
	I0730 00:07:08.836926  503585 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0730 00:07:08.838025  503585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0730 00:07:08.838202  503585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0730 00:07:08.838226  503585 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0730 00:07:08.882257  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:08.885055  503585 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0730 00:07:08.885075  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:08.891911  503585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0730 00:07:08.891936  503585 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0730 00:07:08.902649  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:09.018186  503585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0730 00:07:09.018215  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0730 00:07:09.042683  503585 pod_ready.go:92] pod "kube-scheduler-addons-091578" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:09.042716  503585 pod_ready.go:81] duration metric: took 395.286009ms for pod "kube-scheduler-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:09.042729  503585 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:09.075767  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0730 00:07:09.344537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:09.350593  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:09.354290  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:09.844124  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:09.868807  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:09.876003  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:10.251897  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.078780501s)
	I0730 00:07:10.251962  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:10.251983  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:10.252377  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:10.252429  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:10.252452  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:10.252470  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:10.252479  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:10.252793  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:10.252837  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:10.252856  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:10.371528  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:10.391220  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:10.394600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:10.472057  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.396230203s)
	I0730 00:07:10.472119  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:10.472130  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:10.472537  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:10.472585  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:10.472602  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:10.472628  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:10.472639  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:10.472904  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:10.472926  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:10.474895  503585 addons.go:475] Verifying addon gcp-auth=true in "addons-091578"
	I0730 00:07:10.476314  503585 out.go:177] * Verifying gcp-auth addon...
	I0730 00:07:10.478269  503585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0730 00:07:10.491650  503585 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0730 00:07:10.491678  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:10.844058  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:10.850531  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:10.853830  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:10.985707  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:11.048454  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:11.363968  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:11.364151  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:11.366534  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:11.482243  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:11.843648  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:11.851109  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:11.853914  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:11.983362  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:12.343029  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:12.350822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:12.353918  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:12.482620  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:12.843279  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:12.851392  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:12.853852  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:12.981572  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:13.049277  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:13.343183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:13.351162  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:13.353619  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:13.482594  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:13.844812  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:13.850358  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:13.854003  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:13.981783  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:14.343820  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:14.350640  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:14.353725  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:14.482423  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:14.843094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:14.851043  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:14.853516  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:14.982766  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:15.345791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:15.351650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:15.354306  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:15.482280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:15.548741  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:15.843969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:15.850979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:15.853723  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:15.982500  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:16.343022  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:16.351425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:16.353870  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:16.482418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:16.844570  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:16.850630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:16.853427  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:16.982819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:17.343184  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:17.350630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:17.353605  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:17.482455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:17.844008  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:17.851249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:17.853367  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:17.982088  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:18.048069  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:18.343185  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:18.352097  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:18.353646  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:18.482836  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:18.843418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:18.852183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:18.854045  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:18.981929  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:19.343964  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:19.351341  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:19.353797  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:19.482691  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:19.843712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:19.850740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:19.853946  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:19.982383  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:20.048285  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:20.343084  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:20.351306  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:20.353545  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:20.482522  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:20.843251  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:20.851325  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:20.854210  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:20.982252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:21.344543  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:21.352046  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:21.357792  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:21.483335  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:21.843292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:21.851477  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:21.854734  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:21.982514  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:22.051934  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:22.344558  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:22.351220  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:22.353738  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:22.482800  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:22.843712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:22.850770  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:22.855079  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:22.981891  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:23.343642  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:23.350544  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:23.353887  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:23.482208  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:23.843137  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:23.851554  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:23.853890  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:23.981651  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:24.345138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:24.350965  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:24.354406  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:24.481869  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:24.549429  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:24.843310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:24.851671  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:24.853849  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:24.982661  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:25.343633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:25.354673  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:25.357186  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:25.481940  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:25.843410  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:25.851271  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:25.853501  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:25.982553  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:26.343742  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:26.350666  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:26.353424  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:26.482416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:26.843750  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:26.851543  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:26.853800  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:26.982796  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:27.049414  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:27.343927  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:27.350983  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:27.353387  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:27.482584  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:27.844638  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:27.850444  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:27.853499  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:27.982418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:28.343453  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:28.352261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:28.354998  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:28.481881  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:28.843532  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:28.850819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:28.854958  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:28.981706  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:29.345364  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:29.351298  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:29.353697  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:29.482500  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:29.549440  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:29.843766  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:29.851129  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:29.853438  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:29.982790  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:30.344033  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:30.352013  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:30.354945  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:30.482106  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:30.843546  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:30.850394  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:30.854734  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:30.981845  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:31.343173  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:31.355052  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:31.355350  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:31.482412  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:31.844178  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:31.851431  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:31.853676  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:31.983434  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:32.048898  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:32.344537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:32.353248  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:32.355143  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:32.481697  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:32.843449  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:32.850768  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:32.854451  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:32.982512  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:33.343226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:33.351094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:33.353960  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:33.481606  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:33.843603  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:33.850413  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:33.853352  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:33.982118  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:34.343814  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:34.350773  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:34.353950  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:34.482323  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:34.549131  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:34.843930  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:34.853285  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:34.854774  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:34.982570  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:35.343810  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:35.352082  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:35.354756  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:35.484072  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:35.844696  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:35.851226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:35.853802  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:35.982383  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:36.343921  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:36.350292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:36.353667  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:36.482191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:36.844525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:36.851191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:36.853773  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:36.982629  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:37.048807  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:37.344210  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:37.351733  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:37.354545  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:37.482985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:37.843110  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:37.850999  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:37.853463  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:37.982663  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:38.344275  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:38.351930  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:38.353896  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:38.481790  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:38.843775  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:38.850960  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:38.853644  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:38.982774  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:39.050688  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:39.343792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:39.350633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:39.353791  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:39.482650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:39.844123  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:39.851256  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:39.853702  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:39.982556  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:40.344233  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:40.351500  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:40.353800  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:40.483421  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:40.844735  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:40.851657  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:40.854045  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:40.982308  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:41.343507  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:41.351013  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:41.353446  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:41.482726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:41.548910  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:41.844094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:41.852902  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:41.856902  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:41.981767  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:42.402445  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:42.402556  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:42.404965  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:42.482860  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:42.843998  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:42.850290  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:42.853958  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:42.981824  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:43.344249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:43.351537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:43.354198  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:43.482118  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:43.843995  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:43.851292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:43.854128  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:43.981989  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:44.049006  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:44.343658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:44.350951  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:44.354188  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:44.482703  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:44.843727  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:44.851527  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:44.854098  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:44.982598  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:45.343992  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:45.350988  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:45.353584  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:45.482285  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:45.844400  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:45.851200  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:45.853725  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:45.982732  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:46.050116  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:46.344117  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:46.350925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:46.353680  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:46.482506  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:46.844820  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:46.854828  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:46.856621  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:46.983261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:47.344886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:47.351811  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:47.354270  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:47.482478  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:47.843963  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:47.850749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:47.853836  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:47.981927  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:48.343967  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:48.351000  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:48.354272  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:48.482388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:48.548406  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:48.843728  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:48.850811  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:48.854731  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:48.982576  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:49.343680  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:49.351103  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:49.353483  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:49.481997  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:49.843712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:49.852118  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:49.854888  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:49.981817  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:50.344186  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:50.350847  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:50.353821  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:50.482301  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:50.548463  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:50.843569  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:50.850650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:50.853617  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:50.982253  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:51.342881  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:51.353405  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:51.354854  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:51.482035  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:51.843645  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:51.850702  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:51.854439  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:51.982420  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:52.345146  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:52.351908  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:52.354479  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:52.482650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:52.549236  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:52.843406  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:52.856627  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:52.856790  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:52.981961  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:53.344377  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:53.351310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:53.353565  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:53.482627  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:53.844252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:53.851579  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:53.854121  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:53.983932  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:54.343420  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:54.350339  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:54.354100  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:54.482280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:54.551800  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:54.843645  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:54.850886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:54.854015  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:54.981828  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:55.343220  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:55.352132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:55.353792  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:55.482929  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:55.844070  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:55.850555  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:55.853810  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:55.982822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:56.344304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:56.352439  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:56.354203  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:56.482975  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:56.843732  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:56.851837  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:56.854234  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:56.982827  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:57.048960  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:57.343973  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:57.351189  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:57.353751  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:57.482479  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:57.843386  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:57.850185  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:57.853866  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:57.982493  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:58.343319  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:58.352882  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:58.354700  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:58.482480  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:58.844366  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:58.851528  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:58.853994  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:58.981842  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:59.343414  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:59.350561  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:59.353615  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:59.482406  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:59.549113  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:59.844593  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:59.851651  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:59.854335  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:59.982304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:00.344440  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:00.350713  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:00.353913  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:00.481656  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:00.844904  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:00.850898  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:00.854095  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:00.981928  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:01.343196  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:01.356435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:01.360478  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:01.481858  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:01.549395  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:01.843259  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:01.851569  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:01.853772  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:01.982694  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:02.346056  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:02.351333  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:02.354280  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:02.481679  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:02.847773  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:02.852459  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:02.855414  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:02.982484  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:03.343519  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:03.351379  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:03.353705  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:03.482433  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:03.549702  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:03.844566  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:03.850863  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:03.853702  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:03.983614  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:04.344502  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:04.350752  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:04.354718  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:04.482062  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:04.844407  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:04.850879  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:04.854230  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:04.983988  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:05.343112  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:05.351050  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:05.353725  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:05.482813  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:05.549778  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:05.844354  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:05.851349  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:05.854180  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:05.981475  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:06.346987  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:06.352486  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:06.355978  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:06.482580  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:06.843438  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:06.851851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:06.853714  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:06.982516  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:07.343375  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:07.352293  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:07.353946  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:07.482301  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:07.843326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:07.851419  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:07.853847  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:07.982710  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:08.049580  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:08.343908  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:08.351812  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:08.355340  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:08.482296  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:08.842516  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:08.851189  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:08.853849  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:08.981641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:09.343837  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:09.351010  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:09.353640  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:09.482832  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:09.843815  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:09.851064  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:09.853931  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:09.981643  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:10.346325  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:10.357065  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:10.357293  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:10.482178  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:10.548688  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:10.843439  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:10.850015  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:10.853667  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:10.982477  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:11.343232  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:11.351455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:11.353779  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:11.481598  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:11.843994  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:11.851724  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:11.854685  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:11.982424  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:12.344106  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:12.350926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:12.353582  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:12.482786  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:12.549239  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:12.843526  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:12.852261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:12.854703  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:12.983132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:13.343813  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:13.351243  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:13.354540  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:13.482045  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:13.843789  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:13.851117  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:13.853787  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:13.983451  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:14.344699  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:14.350063  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:14.353452  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:14.481880  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:14.844364  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:14.850636  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:14.854423  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:14.982218  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:15.049097  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:15.344347  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:15.350326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:15.355529  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:15.482207  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:15.843723  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:15.850721  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:15.854186  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:15.981960  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:16.344534  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:16.351788  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:16.354742  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:16.482803  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:16.843356  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:16.850435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:16.853578  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:16.982633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:17.049386  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:17.343207  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:17.352620  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:17.354727  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:17.482927  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:17.843952  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:17.853225  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:17.856305  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:17.982242  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:18.344537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:18.350414  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:18.353542  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:18.482713  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:18.843442  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:18.850775  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:18.853691  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:18.982612  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:19.051202  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:19.345866  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:19.353119  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:19.355450  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:19.482873  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:19.844424  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:19.852187  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:19.858651  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:19.982372  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:20.344142  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:20.350815  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:20.353766  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:20.483378  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:20.844183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:20.851249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:20.853904  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:20.981969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:21.346144  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:21.354796  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:21.359119  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:21.481956  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:21.549839  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:21.843890  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:21.850853  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:21.854318  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:21.982291  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:22.344121  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:22.350877  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:22.354379  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:22.482298  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:22.844064  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:22.851529  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:22.854293  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:22.981883  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:23.343058  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:23.351332  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:23.353502  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:23.482281  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:23.843326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:23.851691  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:23.854082  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:23.981979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:24.049497  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:24.344473  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:24.350162  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:24.353749  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:24.482198  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:24.843791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:24.850425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:24.853272  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:24.982365  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:25.345663  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:25.350363  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:25.354068  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:25.482261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:25.844231  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:25.851270  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:25.853839  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:25.981852  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:26.346367  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:26.350230  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:26.353913  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:26.481771  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:26.548681  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:26.843434  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:26.850514  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:26.853922  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:26.981519  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:27.343641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:27.353773  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:27.355605  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:27.482525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:27.844320  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:27.850521  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:27.853641  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:27.982645  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:28.348429  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:28.350861  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:28.355083  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:28.481909  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:28.552666  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:28.844060  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:28.851689  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:28.853966  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:28.985900  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:29.343608  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:29.351025  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:29.353693  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:29.482396  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:29.843837  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:29.850847  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:29.853867  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:29.982249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:30.344425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:30.350400  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:30.353428  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:30.482106  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:30.843951  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:30.850610  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:30.853670  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:30.982417  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:31.049668  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:31.344158  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:31.351175  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:31.353829  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:31.481603  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:31.844127  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:31.851301  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:31.854833  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:31.981856  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:32.344985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:32.351313  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:32.354007  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:32.482946  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:32.843128  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:32.851210  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:32.854116  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:32.981886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:33.343119  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:33.351149  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:33.353567  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:33.482662  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:33.548680  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:33.843784  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:33.850809  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:33.853862  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:33.981873  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:34.345295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:34.351004  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:34.353981  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:34.482126  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:34.843244  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:34.851134  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:34.853550  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:34.982223  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:35.343307  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:35.351774  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:35.354003  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:35.482555  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:35.548963  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:35.844251  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:35.851128  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:35.853572  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:35.982897  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:36.345134  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:36.350876  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:36.354017  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:36.481841  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:36.844262  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:36.851323  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:36.853775  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:36.984494  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:37.343164  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:37.351257  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:37.354435  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:37.482369  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:37.843194  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:37.851449  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:37.853983  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:37.981950  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:38.049088  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:38.345455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:38.351277  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:38.353940  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:38.482205  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:38.843435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:38.850014  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:38.853716  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:38.982130  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:39.343234  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:39.351273  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:39.355103  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:39.482966  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:39.845312  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:39.851878  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:39.854780  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:39.984121  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:40.344295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:40.350806  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:40.353466  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:40.482588  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:40.549160  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:40.842868  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:40.851126  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:40.853468  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:40.982288  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:41.343843  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:41.350739  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:41.353504  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:41.482136  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:41.843901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:41.850975  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:41.853513  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:41.982247  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:42.346865  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:42.351761  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:42.355171  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:42.482308  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:42.844479  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:42.851105  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:42.853819  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:42.981848  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:43.049144  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:43.343956  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:43.351564  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:43.354796  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:43.483198  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:43.844033  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:43.851819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:43.854506  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:43.982306  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:44.345602  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:44.351168  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:44.354494  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:44.481824  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:44.844103  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:44.851630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:44.855578  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:44.981946  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:45.344395  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:45.355760  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:45.357617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:45.482587  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:45.550116  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:45.844547  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:45.850740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:45.853815  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:45.981729  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:46.345207  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:46.352490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:46.354123  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:46.481986  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:46.844701  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:46.852432  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:46.854034  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:46.981906  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:47.344586  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:47.351945  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:47.354351  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:47.482233  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:47.844824  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:47.851380  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:47.853924  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:47.982700  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:48.049817  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:48.347034  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:48.351478  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:48.353949  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:48.481985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:48.852685  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:48.864491  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:48.864663  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:48.983055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:49.344491  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:49.352129  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:49.355018  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:49.481982  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:49.843591  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:49.850305  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:49.854167  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:49.982203  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:50.348233  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:50.350364  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:50.354610  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:50.482432  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:50.549021  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:50.843647  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:50.850923  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:50.853562  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:50.982696  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:51.342946  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:51.350756  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:51.353857  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:51.481525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:51.844180  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:51.851458  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:51.854096  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:51.982079  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:52.349661  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:52.352121  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:52.354572  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:52.483025  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:52.549839  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:52.843914  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:52.851765  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:52.854814  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:52.983173  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:53.343219  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:53.351458  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:53.354685  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:53.483175  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:53.842972  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:53.850871  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:53.854151  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:53.983215  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:54.347722  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:54.350270  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:54.353723  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:54.482428  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:54.843280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:54.851289  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:54.853644  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:54.983269  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:55.047846  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:55.344405  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:55.350295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:55.354736  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:55.482816  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:55.845449  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:55.850280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:55.854434  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:55.982579  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:56.342979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:56.351746  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:56.354704  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:56.482463  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:56.842955  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:56.851296  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:56.853943  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:56.981715  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:57.048975  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:57.343928  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:57.350914  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:57.353455  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:57.482444  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:57.844233  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:57.851886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:57.855779  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:57.982704  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:58.346944  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:58.350842  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:58.354157  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:58.482360  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:58.843329  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:58.851388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:58.854029  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:58.981513  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:59.343827  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:59.351040  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:59.353885  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:59.483484  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:59.549538  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:59.844437  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:59.851150  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:59.854798  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:59.983489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:00.345368  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:00.351061  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:00.353477  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:00.482455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:00.843914  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:00.850032  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:00.853651  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:00.982840  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:01.343389  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:01.350513  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:01.354476  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:01.481991  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:01.843544  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:01.851488  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:01.853996  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:01.981763  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:02.049081  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:02.345994  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:02.350760  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:02.353779  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:02.481640  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:02.844969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:02.850505  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:02.853974  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:02.981759  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:03.343685  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:03.351395  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:03.353906  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:03.481667  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:03.844378  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:03.852379  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:03.854111  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:03.982059  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:04.342749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:04.350615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:04.353945  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:04.481898  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:04.548654  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:04.843359  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:04.850476  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:04.855158  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:04.982035  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:05.342811  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:05.351255  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:05.354131  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:05.482194  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:05.842979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:05.851489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:05.854323  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:05.983365  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:06.343875  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:06.351085  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:06.353988  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:06.481688  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:06.548855  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:06.843490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:06.850552  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:06.853634  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:06.982288  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:07.344518  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:07.351104  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:07.354677  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:07.483706  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:07.843510  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:07.850563  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:07.853707  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:07.982642  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:08.348269  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:08.351137  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:08.353745  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:08.482937  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:08.549239  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:08.842797  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:08.850715  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:08.853927  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:08.981988  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:09.343688  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:09.350327  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:09.353933  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:09.481650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:09.843156  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:09.851075  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:09.853239  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:09.981896  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:10.346709  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:10.350332  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:10.353871  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:10.481962  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:10.549326  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:10.846142  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:10.851380  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:10.854131  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:10.981950  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:11.342760  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:11.351475  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:11.354716  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:11.481980  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:11.844018  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:11.851143  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:11.854126  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:11.982094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:12.343336  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:12.351435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:12.354564  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:12.481913  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:12.844409  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:12.858995  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:12.859349  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:12.982371  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:13.047986  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:13.343733  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:13.350474  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:13.353362  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:13.482596  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:13.843577  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:13.850561  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:13.854064  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:13.982587  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:14.344426  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:14.351050  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:14.354039  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:14.481428  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:14.843405  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:14.851688  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:14.854448  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:14.983435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:15.048413  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:15.344940  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:15.422148  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:15.422478  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:15.541033  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:15.861231  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:15.871577  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:15.874902  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:15.981679  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:16.344157  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:16.351109  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:16.353556  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:16.482460  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:16.843443  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:16.851023  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:16.853617  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:16.983038  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:17.048937  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:17.344522  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:17.351988  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:17.354022  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:17.481705  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:17.843728  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:17.850529  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:17.853608  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:17.982364  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:18.345261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:18.352653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:18.354632  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:18.482504  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:18.846635  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:18.850787  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:18.854033  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:18.982416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:19.049862  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:19.343851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:19.350371  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:19.354969  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:19.481430  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:19.843765  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:19.853128  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:19.854938  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:19.982577  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:20.344587  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:20.350943  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:20.354586  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:20.482027  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:20.843489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:20.850852  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:20.854615  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:20.981951  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:21.343534  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:21.350676  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:21.353546  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:21.483091  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:21.552816  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:21.842926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:21.851041  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:21.853806  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:21.981822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:22.346652  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:22.350918  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:22.354512  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:22.483615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:22.844512  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:22.851436  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:22.854024  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:22.981763  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:23.343474  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:23.351895  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:23.354507  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:23.482230  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:23.844211  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:23.851319  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:23.853457  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:23.982146  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:24.048805  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:24.343636  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:24.350868  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:24.354075  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:24.481397  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:24.843485  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:24.850372  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:24.853982  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:24.982052  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:25.343976  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:25.350534  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:25.354056  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:25.482473  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:25.844525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:25.855468  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:25.863745  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:25.983532  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:26.049822  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:26.349151  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:26.351199  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:26.353278  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:26.481925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:26.846445  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:26.851304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:26.854300  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:26.982266  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:27.343820  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:27.351209  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:27.354164  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:27.482529  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:27.843895  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:27.853028  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:27.855339  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:27.982314  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:28.346105  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:28.351483  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:28.353521  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:28.482641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:28.549109  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:28.843423  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:28.850303  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:28.854205  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:28.981708  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:29.343628  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:29.351292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:29.353848  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:29.481742  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:29.843661  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:29.853236  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:29.854806  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:29.983347  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:30.349599  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:30.351544  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:30.354270  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:30.482183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:30.549953  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:30.844600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:30.851661  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:30.854241  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:30.981847  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:31.344143  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:31.350855  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:31.354284  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:31.481630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:31.842954  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:31.851470  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:31.854089  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:31.982056  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:32.346240  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:32.351135  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:32.353646  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:32.482680  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:32.844381  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:32.850507  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:32.854280  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:32.982570  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:33.048670  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:33.343861  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:33.351032  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:33.353598  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:33.482957  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:33.843188  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:33.851435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:33.854144  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:33.981922  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:34.351446  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:34.353901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:34.357389  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:34.482993  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:34.845060  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:34.850949  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:34.854678  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:34.982792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:35.048826  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:35.344547  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:35.350373  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:35.353706  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:35.483599  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:35.844373  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:35.851834  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:35.856373  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:35.981756  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:36.349389  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:36.352624  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:36.355147  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:36.482110  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:36.843579  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:36.850519  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:36.853931  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:36.981818  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:37.344409  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:37.350749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:37.353977  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:37.482103  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:37.548506  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:37.844111  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:37.851263  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:37.853508  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:37.982524  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:38.347265  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:38.352497  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:38.354292  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:38.482133  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:38.843490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:38.850839  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:38.854221  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:38.982191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:39.343888  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:39.351560  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:39.354463  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:39.482696  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:39.548875  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:39.844144  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:39.851387  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:39.853913  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:39.982643  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:40.346429  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:40.350484  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:40.353677  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:40.483250  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:40.844009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:40.851180  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:40.856458  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:40.982090  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:41.343520  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:41.350791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:41.353748  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:41.483255  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:41.843787  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:41.851234  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:41.854287  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:41.982640  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:42.050122  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:42.344677  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:42.350830  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:42.354080  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:42.482210  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:42.843872  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:42.855659  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:42.858299  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:42.982018  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:43.343608  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:43.351199  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:43.353686  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:43.482598  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:43.843617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:43.850653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:43.853919  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:43.981668  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:44.342936  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:44.352083  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:44.354469  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:44.482177  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:44.549552  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:44.843427  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:44.850367  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:44.854231  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:44.982181  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:45.344138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:45.351273  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:45.353524  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:45.482110  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:45.844416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:45.851238  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:45.853626  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:45.983171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:46.343426  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:46.350334  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:46.353825  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:46.482611  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:46.843235  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:46.851125  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:46.853770  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:46.982225  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:47.049151  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:47.344543  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:47.350443  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:47.354275  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:47.481867  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:47.844343  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:47.850502  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:47.853652  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:47.982438  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:48.347810  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:48.350607  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:48.353649  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:48.482555  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:48.844171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:48.853494  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:48.854415  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:48.982009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:49.344643  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:49.351269  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:49.353644  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:49.482261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:49.549283  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:49.843334  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:49.851401  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:49.853868  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:49.982204  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:50.343728  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:50.350786  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:50.354156  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:50.482411  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:50.843727  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:50.850545  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:50.853536  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:50.982598  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:51.344164  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:51.351916  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:51.357436  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:51.482369  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:51.844060  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:51.852334  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:51.853973  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:51.981652  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:52.049061  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:52.345338  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:52.351560  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:52.354672  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:52.482718  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:52.843810  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:52.851255  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:52.853987  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:52.981793  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:53.343373  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:53.354252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:53.356979  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:53.481414  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:53.843508  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:53.853113  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:53.855440  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:53.982390  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:54.346586  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:54.352354  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:54.354930  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:54.481359  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:54.548352  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:54.845821  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:54.852628  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:54.854722  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:54.982476  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:55.344900  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:55.350103  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:55.354316  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:55.482206  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:55.844327  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:55.851669  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:55.853955  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:55.982154  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:56.346088  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:56.351852  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:56.354320  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:56.482538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:56.549083  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:56.843340  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:56.850116  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:56.853967  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:56.982193  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:57.344189  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:57.351539  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:57.357821  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:57.809944  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:57.846290  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:57.851755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:57.854733  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:57.982581  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:58.349265  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:58.354388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:58.355035  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:58.483040  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:58.550464  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:58.844309  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:58.854484  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:58.854588  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:58.982953  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:59.346793  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:59.352954  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:59.355019  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:59.482527  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:59.846095  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:59.851477  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:59.854072  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:59.982076  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:00.347721  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:00.355202  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:00.355267  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:00.482896  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:00.846926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:00.855323  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:00.855922  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:00.982909  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:01.050126  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:01.347150  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:01.350847  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:01.354451  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:01.482319  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:01.844201  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:01.851549  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:01.853890  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:01.982191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:02.344284  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:02.350744  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:02.353914  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:02.481748  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:02.844261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:02.851500  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:02.854311  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:02.982624  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:03.344311  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:03.352079  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:03.353439  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:03.482737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:03.549136  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:03.845649  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:03.850727  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:03.854379  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:03.982343  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:04.345094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:04.351384  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:04.354356  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:04.482499  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:04.845184  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:04.861119  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:04.861405  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:04.982489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:05.345111  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:05.351109  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:05.353521  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:05.483466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:05.844353  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:05.852006  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:05.854112  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:05.981990  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:06.049577  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:06.363740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:06.366704  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:06.371160  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:06.482611  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:06.844268  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:06.851280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:06.853864  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:06.981892  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:07.345906  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:07.352657  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:07.356175  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:07.481744  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:07.844120  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:07.852386  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:07.853896  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:07.981908  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:08.345984  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:08.352178  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:08.354975  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:08.482326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:08.548340  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:08.846677  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:08.852139  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:08.854927  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:08.981863  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:09.343676  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:09.350865  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:09.353850  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:09.482091  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:09.843876  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:09.850863  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:09.853765  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:09.981733  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:10.343805  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:10.353665  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:10.358100  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:10.481821  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:10.549143  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:10.843684  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:10.851156  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:10.853700  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:10.982711  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:11.343386  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:11.350404  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:11.353563  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:11.482503  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:11.843471  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:11.850804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:11.854394  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:11.982205  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:12.344559  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:12.356493  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:12.360685  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:12.482904  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:12.549689  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:12.845362  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:12.851444  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:12.854192  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:12.982144  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:13.343490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:13.350748  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:13.354164  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:13.482393  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:13.843490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:13.851140  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:13.853955  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:13.981899  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:14.345075  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:14.353749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:14.356306  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:14.482935  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:14.845206  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:14.851418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:14.854953  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:14.982832  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:15.050718  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:15.344698  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:15.352147  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:15.354814  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:15.482914  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:15.843594  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:15.850612  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:15.854250  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:15.981822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:16.344352  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:16.355068  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:16.355223  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:16.482512  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:16.843877  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:16.852165  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:16.854206  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:16.983784  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:17.343952  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:17.351134  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:17.354320  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:17.482316  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:17.548165  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:17.843235  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:17.854547  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:17.854757  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:17.982635  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:18.344289  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:18.355450  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:18.355895  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:18.482831  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:18.843523  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:18.850790  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:18.853690  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:18.982812  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:19.343805  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:19.352950  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:19.354818  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:19.481408  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:19.548450  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:19.843617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:19.851698  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:19.854050  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:19.981911  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:20.343486  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:20.350919  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:20.353484  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:20.482432  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:20.844059  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:20.852064  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:20.853902  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:20.981606  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:21.343087  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:21.351333  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:21.353942  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:21.482870  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:21.549780  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:21.843885  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:21.851308  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:21.854091  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:21.981724  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:22.343302  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:22.356583  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:22.357293  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:22.482332  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:22.844030  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:22.851904  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:22.854405  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:22.982430  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:23.345342  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:23.352058  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:23.354567  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:23.482502  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:23.551285  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:23.843867  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:23.851747  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:23.854235  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:23.982743  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:24.349615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:24.352798  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:24.354361  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:24.484973  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:24.843941  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:24.851478  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:24.854826  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:24.981950  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:25.343330  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:25.352718  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:25.354399  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:25.482351  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:25.844277  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:25.851538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:25.853730  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:25.982701  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:26.049014  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:26.344072  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:26.357295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:26.359202  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:26.482140  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:26.842829  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:26.851056  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:26.854715  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:26.981985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:27.343390  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:27.351538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:27.354476  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:27.482428  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:27.843658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:27.850948  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:27.853870  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:27.981973  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:28.049466  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:28.343518  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:28.351517  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:28.354329  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:28.482366  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:28.844149  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:28.851882  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:28.854366  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:28.982672  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:29.343695  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:29.350530  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:29.353902  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:29.482133  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:29.845869  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:29.850937  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:29.854579  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:29.982047  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:30.343525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:30.358083  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:30.358549  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:30.482345  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:30.549555  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:30.844496  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:30.851255  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:30.853781  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:30.981838  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:31.343630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:31.350653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:31.353761  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:31.482422  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:31.843777  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:31.850969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:31.854433  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:31.982508  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:32.343540  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:32.352792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:32.355565  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:32.482895  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:32.843319  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:32.851832  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:32.854804  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:32.983304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:33.048824  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:33.343844  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:33.355020  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:33.358766  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:33.482999  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:33.844041  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:33.857517  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:33.861002  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:33.982350  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:34.345104  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:34.357915  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:34.358115  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:34.485315  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:34.845168  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:34.853582  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:34.855171  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:34.981985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:35.049258  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:35.343168  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:35.351617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:35.357917  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:35.482389  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:35.551202  503585 pod_ready.go:92] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"True"
	I0730 00:10:35.551227  503585 pod_ready.go:81] duration metric: took 3m26.508489745s for pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace to be "Ready" ...
	I0730 00:10:35.551241  503585 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ln654" in "kube-system" namespace to be "Ready" ...
	I0730 00:10:35.555391  503585 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ln654" in "kube-system" namespace has status "Ready":"True"
	I0730 00:10:35.555416  503585 pod_ready.go:81] duration metric: took 4.165642ms for pod "nvidia-device-plugin-daemonset-ln654" in "kube-system" namespace to be "Ready" ...
	I0730 00:10:35.555447  503585 pod_ready.go:38] duration metric: took 3m27.703278328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:10:35.555497  503585 api_server.go:52] waiting for apiserver process to appear ...
	I0730 00:10:35.555545  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 00:10:35.555620  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 00:10:35.603203  503585 cri.go:89] found id: "cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:35.603232  503585 cri.go:89] found id: ""
	I0730 00:10:35.603243  503585 logs.go:276] 1 containers: [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363]
	I0730 00:10:35.603298  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.607385  503585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 00:10:35.607465  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 00:10:35.647403  503585 cri.go:89] found id: "499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:35.647426  503585 cri.go:89] found id: ""
	I0730 00:10:35.647438  503585 logs.go:276] 1 containers: [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9]
	I0730 00:10:35.647499  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.651234  503585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 00:10:35.651309  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 00:10:35.686655  503585 cri.go:89] found id: "f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:35.686684  503585 cri.go:89] found id: ""
	I0730 00:10:35.686694  503585 logs.go:276] 1 containers: [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330]
	I0730 00:10:35.686763  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.690805  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 00:10:35.690875  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 00:10:35.725597  503585 cri.go:89] found id: "3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:35.725619  503585 cri.go:89] found id: ""
	I0730 00:10:35.725627  503585 logs.go:276] 1 containers: [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568]
	I0730 00:10:35.725679  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.729678  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 00:10:35.729748  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 00:10:35.764742  503585 cri.go:89] found id: "ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:35.764769  503585 cri.go:89] found id: ""
	I0730 00:10:35.764778  503585 logs.go:276] 1 containers: [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d]
	I0730 00:10:35.764844  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.769112  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 00:10:35.769186  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 00:10:35.809087  503585 cri.go:89] found id: "60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:35.809109  503585 cri.go:89] found id: ""
	I0730 00:10:35.809119  503585 logs.go:276] 1 containers: [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952]
	I0730 00:10:35.809184  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.813304  503585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 00:10:35.813387  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 00:10:35.845044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:35.852762  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:35.855462  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:35.861476  503585 cri.go:89] found id: ""
	I0730 00:10:35.861498  503585 logs.go:276] 0 containers: []
	W0730 00:10:35.861508  503585 logs.go:278] No container was found matching "kindnet"
	I0730 00:10:35.861521  503585 logs.go:123] Gathering logs for kube-proxy [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d] ...
	I0730 00:10:35.861539  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:35.895544  503585 logs.go:123] Gathering logs for container status ...
	I0730 00:10:35.895578  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 00:10:35.942760  503585 logs.go:123] Gathering logs for dmesg ...
	I0730 00:10:35.942792  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 00:10:35.957530  503585 logs.go:123] Gathering logs for describe nodes ...
	I0730 00:10:35.957566  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 00:10:35.981982  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:36.086048  503585 logs.go:123] Gathering logs for kube-apiserver [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363] ...
	I0730 00:10:36.086090  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:36.142889  503585 logs.go:123] Gathering logs for kube-scheduler [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568] ...
	I0730 00:10:36.142921  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:36.185336  503585 logs.go:123] Gathering logs for kube-controller-manager [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952] ...
	I0730 00:10:36.185371  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:36.246428  503585 logs.go:123] Gathering logs for CRI-O ...
	I0730 00:10:36.246469  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 00:10:36.344310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:36.352815  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:36.354505  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:36.482558  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:36.845109  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:36.851658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:36.853927  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:36.981643  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:37.149702  503585 logs.go:123] Gathering logs for kubelet ...
	I0730 00:10:37.149757  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 00:10:37.227004  503585 logs.go:123] Gathering logs for etcd [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9] ...
	I0730 00:10:37.227050  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:37.272012  503585 logs.go:123] Gathering logs for coredns [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330] ...
	I0730 00:10:37.272069  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:37.344949  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:37.352071  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:37.355240  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:37.482438  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:37.844653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:37.851064  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:37.853735  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:37.983425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:38.344464  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:38.355798  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:38.357710  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:38.482886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:38.844075  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:38.851062  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:38.854025  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:38.982262  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:39.343227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:39.351804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:39.354242  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:39.482851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:39.815157  503585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:10:39.835113  503585 api_server.go:72] duration metric: took 3m40.494284008s to wait for apiserver process to appear ...
	I0730 00:10:39.835156  503585 api_server.go:88] waiting for apiserver healthz status ...
	I0730 00:10:39.835206  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 00:10:39.835283  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 00:10:39.843434  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:39.852597  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:39.855118  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:39.873982  503585 cri.go:89] found id: "cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:39.874004  503585 cri.go:89] found id: ""
	I0730 00:10:39.874013  503585 logs.go:276] 1 containers: [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363]
	I0730 00:10:39.874094  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:39.878094  503585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 00:10:39.878171  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 00:10:39.922245  503585 cri.go:89] found id: "499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:39.922266  503585 cri.go:89] found id: ""
	I0730 00:10:39.922274  503585 logs.go:276] 1 containers: [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9]
	I0730 00:10:39.922328  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:39.926115  503585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 00:10:39.926161  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 00:10:39.959528  503585 cri.go:89] found id: "f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:39.959553  503585 cri.go:89] found id: ""
	I0730 00:10:39.959561  503585 logs.go:276] 1 containers: [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330]
	I0730 00:10:39.959615  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:39.964358  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 00:10:39.964425  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 00:10:39.982418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:39.999510  503585 cri.go:89] found id: "3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:39.999533  503585 cri.go:89] found id: ""
	I0730 00:10:39.999541  503585 logs.go:276] 1 containers: [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568]
	I0730 00:10:39.999605  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:40.003701  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 00:10:40.003770  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 00:10:40.038352  503585 cri.go:89] found id: "ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:40.038380  503585 cri.go:89] found id: ""
	I0730 00:10:40.038391  503585 logs.go:276] 1 containers: [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d]
	I0730 00:10:40.038461  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:40.042807  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 00:10:40.042871  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 00:10:40.077327  503585 cri.go:89] found id: "60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:40.077353  503585 cri.go:89] found id: ""
	I0730 00:10:40.077363  503585 logs.go:276] 1 containers: [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952]
	I0730 00:10:40.077414  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:40.081214  503585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 00:10:40.081300  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 00:10:40.114924  503585 cri.go:89] found id: ""
	I0730 00:10:40.114963  503585 logs.go:276] 0 containers: []
	W0730 00:10:40.114975  503585 logs.go:278] No container was found matching "kindnet"
	I0730 00:10:40.114987  503585 logs.go:123] Gathering logs for dmesg ...
	I0730 00:10:40.115004  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 00:10:40.128502  503585 logs.go:123] Gathering logs for kube-scheduler [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568] ...
	I0730 00:10:40.128532  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:40.169837  503585 logs.go:123] Gathering logs for kube-proxy [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d] ...
	I0730 00:10:40.169873  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:40.210979  503585 logs.go:123] Gathering logs for kube-controller-manager [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952] ...
	I0730 00:10:40.211010  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:40.265650  503585 logs.go:123] Gathering logs for CRI-O ...
	I0730 00:10:40.265699  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 00:10:40.353141  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:40.355590  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:40.360791  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:40.482739  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:40.843885  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:40.850745  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:40.854208  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:40.982371  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:41.027434  503585 logs.go:123] Gathering logs for kubelet ...
	I0730 00:10:41.027493  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 00:10:41.097577  503585 logs.go:123] Gathering logs for describe nodes ...
	I0730 00:10:41.097622  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 00:10:41.216178  503585 logs.go:123] Gathering logs for kube-apiserver [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363] ...
	I0730 00:10:41.216215  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:41.271450  503585 logs.go:123] Gathering logs for etcd [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9] ...
	I0730 00:10:41.271496  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:41.322552  503585 logs.go:123] Gathering logs for coredns [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330] ...
	I0730 00:10:41.322595  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:41.343739  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:41.352232  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:41.355513  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:41.364798  503585 logs.go:123] Gathering logs for container status ...
	I0730 00:10:41.364827  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 00:10:41.482803  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:41.844454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:41.851002  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:41.853645  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:41.983144  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:42.343225  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:42.353549  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:42.355297  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:42.482455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:42.843589  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:42.850533  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:42.853867  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:42.981778  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:43.343809  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:43.350945  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:43.353374  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:43.481825  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:43.844317  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:43.851663  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:43.854069  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:43.910633  503585 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I0730 00:10:43.915808  503585 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I0730 00:10:43.916853  503585 api_server.go:141] control plane version: v1.30.3
	I0730 00:10:43.916878  503585 api_server.go:131] duration metric: took 4.081714371s to wait for apiserver health ...
	I0730 00:10:43.916887  503585 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 00:10:43.916914  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 00:10:43.916965  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 00:10:43.951907  503585 cri.go:89] found id: "cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:43.951937  503585 cri.go:89] found id: ""
	I0730 00:10:43.951947  503585 logs.go:276] 1 containers: [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363]
	I0730 00:10:43.952006  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:43.955910  503585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 00:10:43.955972  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 00:10:43.982592  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:43.996176  503585 cri.go:89] found id: "499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:43.996201  503585 cri.go:89] found id: ""
	I0730 00:10:43.996212  503585 logs.go:276] 1 containers: [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9]
	I0730 00:10:43.996274  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.000468  503585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 00:10:44.000537  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 00:10:44.034889  503585 cri.go:89] found id: "f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:44.034918  503585 cri.go:89] found id: ""
	I0730 00:10:44.034929  503585 logs.go:276] 1 containers: [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330]
	I0730 00:10:44.034985  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.038959  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 00:10:44.039042  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 00:10:44.077320  503585 cri.go:89] found id: "3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:44.077344  503585 cri.go:89] found id: ""
	I0730 00:10:44.077352  503585 logs.go:276] 1 containers: [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568]
	I0730 00:10:44.077405  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.081536  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 00:10:44.081613  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 00:10:44.116042  503585 cri.go:89] found id: "ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:44.116067  503585 cri.go:89] found id: ""
	I0730 00:10:44.116075  503585 logs.go:276] 1 containers: [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d]
	I0730 00:10:44.116131  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.120107  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 00:10:44.120183  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 00:10:44.154944  503585 cri.go:89] found id: "60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:44.154973  503585 cri.go:89] found id: ""
	I0730 00:10:44.154985  503585 logs.go:276] 1 containers: [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952]
	I0730 00:10:44.155075  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.159060  503585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 00:10:44.159139  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 00:10:44.192874  503585 cri.go:89] found id: ""
	I0730 00:10:44.192902  503585 logs.go:276] 0 containers: []
	W0730 00:10:44.192911  503585 logs.go:278] No container was found matching "kindnet"
	I0730 00:10:44.192922  503585 logs.go:123] Gathering logs for coredns [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330] ...
	I0730 00:10:44.192949  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:44.228061  503585 logs.go:123] Gathering logs for kube-scheduler [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568] ...
	I0730 00:10:44.228091  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:44.270787  503585 logs.go:123] Gathering logs for kube-controller-manager [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952] ...
	I0730 00:10:44.270827  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:44.335225  503585 logs.go:123] Gathering logs for CRI-O ...
	I0730 00:10:44.335262  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 00:10:44.343201  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:44.351303  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:44.354890  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:44.482251  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:44.845621  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:44.851435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:44.854473  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:44.982504  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:44.996224  503585 logs.go:123] Gathering logs for etcd [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9] ...
	I0730 00:10:44.996278  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:45.037212  503585 logs.go:123] Gathering logs for kube-proxy [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d] ...
	I0730 00:10:45.037246  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:45.070880  503585 logs.go:123] Gathering logs for container status ...
	I0730 00:10:45.070910  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 00:10:45.118520  503585 logs.go:123] Gathering logs for kubelet ...
	I0730 00:10:45.118556  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 00:10:45.189686  503585 logs.go:123] Gathering logs for dmesg ...
	I0730 00:10:45.189729  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 00:10:45.203951  503585 logs.go:123] Gathering logs for describe nodes ...
	I0730 00:10:45.203985  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 00:10:45.319722  503585 logs.go:123] Gathering logs for kube-apiserver [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363] ...
	I0730 00:10:45.319761  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:45.346303  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:45.351120  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:45.353855  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:45.482392  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:45.843658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:45.851237  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:45.854773  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:45.983353  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:46.343138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:46.352343  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:46.354797  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:46.482102  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:46.843383  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:46.850680  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:46.854393  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:46.990454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:47.344017  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:47.350808  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:47.354409  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:47.482231  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:47.843149  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:47.850986  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:47.853278  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:47.885665  503585 system_pods.go:59] 18 kube-system pods found
	I0730 00:10:47.885699  503585 system_pods.go:61] "coredns-7db6d8ff4d-lznwz" [547ad840-f72d-4dd5-b452-c9368370f5f9] Running
	I0730 00:10:47.885709  503585 system_pods.go:61] "csi-hostpath-attacher-0" [75121907-e5d8-4377-a36b-77be23e5b05d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0730 00:10:47.885719  503585 system_pods.go:61] "csi-hostpath-resizer-0" [9b84b86e-e802-4cfe-8a48-95f95a7ef99a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0730 00:10:47.885729  503585 system_pods.go:61] "csi-hostpathplugin-52djf" [6f0e9aeb-dcc9-4b01-8442-8c1f93583cea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0730 00:10:47.885736  503585 system_pods.go:61] "etcd-addons-091578" [c1861038-1f13-43ee-8e13-3d94f488ca4b] Running
	I0730 00:10:47.885742  503585 system_pods.go:61] "kube-apiserver-addons-091578" [14441c40-e373-40a1-8b22-2f7d6acfaf0c] Running
	I0730 00:10:47.885746  503585 system_pods.go:61] "kube-controller-manager-addons-091578" [84d24c29-acf6-42d5-b516-b1d852d1adfd] Running
	I0730 00:10:47.885754  503585 system_pods.go:61] "kube-ingress-dns-minikube" [7057a5f6-2896-4f06-9824-0772c339905f] Running
	I0730 00:10:47.885760  503585 system_pods.go:61] "kube-proxy-4j5tl" [d252b4fe-1396-4ebd-9108-a3a6874b8245] Running
	I0730 00:10:47.885764  503585 system_pods.go:61] "kube-scheduler-addons-091578" [a4346809-fd43-484e-b6a1-165f50b28ad8] Running
	I0730 00:10:47.885770  503585 system_pods.go:61] "metrics-server-c59844bb4-4z28f" [8efac445-c550-499b-9e0a-05b83969bc15] Running
	I0730 00:10:47.885777  503585 system_pods.go:61] "nvidia-device-plugin-daemonset-ln654" [f07b96ab-d52e-45d8-9c29-00c89fc8619e] Running
	I0730 00:10:47.885787  503585 system_pods.go:61] "registry-698f998955-mczh9" [99907a0e-3d47-408f-b8ea-3725dee9f03b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0730 00:10:47.885800  503585 system_pods.go:61] "registry-proxy-nqxzf" [613243a6-ea19-4999-ad5f-ca96c8e11bfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0730 00:10:47.885814  503585 system_pods.go:61] "snapshot-controller-745499f584-jc7wn" [b3945078-d405-4d3b-86fa-941fda4173df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0730 00:10:47.885827  503585 system_pods.go:61] "snapshot-controller-745499f584-q92j4" [fc3f1272-bf9e-40bd-9504-79a1529e0738] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0730 00:10:47.885832  503585 system_pods.go:61] "storage-provisioner" [383d9f3e-a160-4fa0-bf37-8472c0c4607c] Running
	I0730 00:10:47.885840  503585 system_pods.go:61] "tiller-deploy-6677d64bcd-7kxlp" [e02f9185-5b7f-40f5-baf0-64a0c45bc97e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0730 00:10:47.885849  503585 system_pods.go:74] duration metric: took 3.968954532s to wait for pod list to return data ...
	I0730 00:10:47.885862  503585 default_sa.go:34] waiting for default service account to be created ...
	I0730 00:10:47.887724  503585 default_sa.go:45] found service account: "default"
	I0730 00:10:47.887744  503585 default_sa.go:55] duration metric: took 1.875431ms for default service account to be created ...
	I0730 00:10:47.887751  503585 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 00:10:47.895217  503585 system_pods.go:86] 18 kube-system pods found
	I0730 00:10:47.895246  503585 system_pods.go:89] "coredns-7db6d8ff4d-lznwz" [547ad840-f72d-4dd5-b452-c9368370f5f9] Running
	I0730 00:10:47.895255  503585 system_pods.go:89] "csi-hostpath-attacher-0" [75121907-e5d8-4377-a36b-77be23e5b05d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0730 00:10:47.895262  503585 system_pods.go:89] "csi-hostpath-resizer-0" [9b84b86e-e802-4cfe-8a48-95f95a7ef99a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0730 00:10:47.895271  503585 system_pods.go:89] "csi-hostpathplugin-52djf" [6f0e9aeb-dcc9-4b01-8442-8c1f93583cea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0730 00:10:47.895283  503585 system_pods.go:89] "etcd-addons-091578" [c1861038-1f13-43ee-8e13-3d94f488ca4b] Running
	I0730 00:10:47.895288  503585 system_pods.go:89] "kube-apiserver-addons-091578" [14441c40-e373-40a1-8b22-2f7d6acfaf0c] Running
	I0730 00:10:47.895292  503585 system_pods.go:89] "kube-controller-manager-addons-091578" [84d24c29-acf6-42d5-b516-b1d852d1adfd] Running
	I0730 00:10:47.895298  503585 system_pods.go:89] "kube-ingress-dns-minikube" [7057a5f6-2896-4f06-9824-0772c339905f] Running
	I0730 00:10:47.895302  503585 system_pods.go:89] "kube-proxy-4j5tl" [d252b4fe-1396-4ebd-9108-a3a6874b8245] Running
	I0730 00:10:47.895308  503585 system_pods.go:89] "kube-scheduler-addons-091578" [a4346809-fd43-484e-b6a1-165f50b28ad8] Running
	I0730 00:10:47.895312  503585 system_pods.go:89] "metrics-server-c59844bb4-4z28f" [8efac445-c550-499b-9e0a-05b83969bc15] Running
	I0730 00:10:47.895319  503585 system_pods.go:89] "nvidia-device-plugin-daemonset-ln654" [f07b96ab-d52e-45d8-9c29-00c89fc8619e] Running
	I0730 00:10:47.895325  503585 system_pods.go:89] "registry-698f998955-mczh9" [99907a0e-3d47-408f-b8ea-3725dee9f03b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0730 00:10:47.895331  503585 system_pods.go:89] "registry-proxy-nqxzf" [613243a6-ea19-4999-ad5f-ca96c8e11bfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0730 00:10:47.895339  503585 system_pods.go:89] "snapshot-controller-745499f584-jc7wn" [b3945078-d405-4d3b-86fa-941fda4173df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0730 00:10:47.895349  503585 system_pods.go:89] "snapshot-controller-745499f584-q92j4" [fc3f1272-bf9e-40bd-9504-79a1529e0738] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0730 00:10:47.895353  503585 system_pods.go:89] "storage-provisioner" [383d9f3e-a160-4fa0-bf37-8472c0c4607c] Running
	I0730 00:10:47.895360  503585 system_pods.go:89] "tiller-deploy-6677d64bcd-7kxlp" [e02f9185-5b7f-40f5-baf0-64a0c45bc97e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0730 00:10:47.895366  503585 system_pods.go:126] duration metric: took 7.609576ms to wait for k8s-apps to be running ...
	I0730 00:10:47.895376  503585 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 00:10:47.895423  503585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:10:47.910714  503585 system_svc.go:56] duration metric: took 15.32925ms WaitForService to wait for kubelet
	I0730 00:10:47.910743  503585 kubeadm.go:582] duration metric: took 3m48.56992122s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:10:47.910766  503585 node_conditions.go:102] verifying NodePressure condition ...
	I0730 00:10:47.913597  503585 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:10:47.913624  503585 node_conditions.go:123] node cpu capacity is 2
	I0730 00:10:47.913650  503585 node_conditions.go:105] duration metric: took 2.879925ms to run NodePressure ...
	I0730 00:10:47.913662  503585 start.go:241] waiting for startup goroutines ...
	I0730 00:10:47.981679  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:48.343843  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:48.357347  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:48.357970  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:48.481804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:48.844562  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:48.851410  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:48.853651  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:48.982616  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:49.343608  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:49.350817  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:49.353881  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:49.481761  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:49.843361  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:49.852792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:49.854061  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:49.982056  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:50.344833  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:50.353855  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:50.355643  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:50.482581  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:50.844372  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:50.851285  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:50.853817  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:50.981719  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:51.343922  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:51.350720  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:51.353887  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:51.482382  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:51.843566  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:51.852171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:51.854294  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:51.982008  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:52.343802  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:52.352540  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:52.354677  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:52.482741  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:52.843685  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:52.850565  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:52.853989  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:52.982905  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:53.345793  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:53.351600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:53.354452  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:53.485104  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:53.843252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:53.851943  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:53.854166  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:53.981792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:54.343696  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:54.355809  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:54.355952  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:54.482191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:54.843227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:54.851594  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:54.854155  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:54.983411  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:55.343698  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:55.350697  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:55.354089  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:55.482011  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:55.844078  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:55.851389  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:55.854316  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:55.982563  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:56.343553  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:56.354106  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:56.355401  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:56.482840  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:56.859972  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:56.873553  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:56.875485  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:56.982568  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:57.343858  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:57.351850  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:57.363670  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:57.482466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:57.843937  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:57.851921  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:57.854391  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:57.982625  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:58.344381  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:58.357977  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:58.359524  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:58.482310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:58.843351  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:58.851740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:58.854095  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:58.982211  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:59.344329  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:59.354722  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:59.357846  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:59.482621  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:59.843589  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:59.850910  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:59.854152  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:59.981562  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:00.343639  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:00.353145  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:00.356235  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:00.482292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:00.843203  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:00.851501  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:00.853783  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:00.981804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:01.343606  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:01.350953  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:01.355008  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:01.482221  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:01.842882  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:01.850807  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:01.855151  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:01.982062  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:02.343772  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:02.357098  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:02.357107  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:02.482268  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:02.843325  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:02.853096  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:02.854769  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:02.982576  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:03.343620  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:03.353710  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:03.355593  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:03.482293  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:03.843202  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:03.850907  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:03.853553  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:03.982178  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:04.343086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:04.354291  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:04.354997  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:04.481941  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:04.843733  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:04.850843  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:04.854149  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:04.982032  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:05.344101  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:05.351130  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:05.354170  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:05.482346  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:05.844639  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:05.851973  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:05.855078  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:05.982044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:06.343031  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:06.354351  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:06.355296  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:06.482086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:06.843674  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:06.850591  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:06.853447  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:06.982880  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:07.344033  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:07.351195  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:07.353407  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:07.482452  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:07.843724  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:07.850487  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:07.853762  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:07.982646  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:08.343804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:08.353268  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:08.355516  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:08.482113  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:08.843042  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:08.851207  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:08.855070  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:08.981595  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:09.343556  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:09.356915  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:09.360138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:09.482399  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:09.843781  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:09.851670  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:09.854022  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:09.981655  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:10.343921  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:10.352925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:10.355373  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:10.482259  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:10.843254  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:10.851132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:10.853710  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:10.982672  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:11.343755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:11.350734  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:11.354003  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:11.481786  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:11.843572  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:11.850775  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:11.853824  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:11.982454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:12.343351  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:12.355788  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:12.356016  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:12.482261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:12.843317  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:12.852015  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:12.854340  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:12.982242  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:13.343341  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:13.350485  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:13.353400  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:13.482533  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:13.844059  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:13.851088  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:13.853823  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:13.982252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:14.343296  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:14.353564  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:14.354716  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:14.482762  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:14.844423  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:14.850485  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:14.853662  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:14.982288  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:15.343326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:15.351085  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:15.353615  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:15.482833  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:15.843936  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:15.850919  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:15.854351  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:15.982419  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:16.345120  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:16.356472  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:16.357450  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:16.482021  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:16.844071  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:16.850886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:16.853547  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:16.982181  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:17.342867  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:17.350760  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:17.353694  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:17.482712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:17.843600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:17.850492  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:17.853690  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:17.982804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:18.344311  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:18.353306  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:18.355614  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:18.482571  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:18.843869  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:18.850722  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:18.855137  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:18.981885  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:19.343808  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:19.351397  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:19.353896  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:19.482996  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:19.843864  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:19.851081  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:19.853285  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:19.982754  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:20.343709  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:20.352622  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:20.355471  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:20.482036  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:20.843555  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:20.850449  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:20.853363  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:20.982165  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:21.343146  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:21.352039  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:21.353849  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:21.481755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:21.844112  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:21.851057  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:21.853763  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:21.982708  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:22.344856  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:22.353866  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:22.355113  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:22.482584  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:22.843932  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:22.851096  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:22.853589  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:22.982276  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:23.343902  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:23.351118  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:23.353765  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:23.482123  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:23.843141  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:23.851136  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:23.853418  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:23.982409  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:24.343378  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:24.354954  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:24.355211  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:24.481933  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:24.844890  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:24.850557  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:24.853487  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:24.982827  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:25.343766  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:25.352143  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:25.354426  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:25.482788  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:25.844130  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:25.851497  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:25.853717  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:25.982920  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:26.344009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:26.354278  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:26.356243  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:26.482147  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:26.843658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:26.850700  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:26.854226  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:26.982157  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:27.345699  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:27.358901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:27.359017  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:27.483227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:27.843540  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:27.850413  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:27.854157  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:27.982422  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:28.343280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:28.353974  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:28.355163  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:28.483045  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:28.845208  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:28.850711  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:28.854274  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:28.981987  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:29.344349  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:29.351020  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:29.354292  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:29.483259  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:29.843193  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:29.851388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:29.853897  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:29.981536  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:30.343464  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:30.354940  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:30.355005  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:30.482044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:30.844171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:30.851784  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:30.854260  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:30.982524  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:31.343669  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:31.351554  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:31.353721  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:31.482687  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:31.845044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:31.850310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:31.853852  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:31.982082  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:32.343132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:32.356009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:32.356414  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:32.482505  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:32.843650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:32.850665  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:32.853828  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:32.983700  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:33.343827  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:33.350880  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:33.354240  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:33.481745  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:33.843433  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:33.851422  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:33.853882  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:33.981737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:34.343580  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:34.354477  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:34.356009  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:34.482767  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:34.843792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:34.850635  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:34.853822  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:34.982690  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:35.343572  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:35.351167  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:35.354026  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:35.482112  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:35.844343  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:35.851249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:35.853903  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:35.982496  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:36.343099  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:36.350616  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:36.355674  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:36.482490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:36.844286  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:36.850391  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:36.854154  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:36.982171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:37.342813  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:37.351025  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:37.353641  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:37.483467  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:37.843615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:37.851730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:37.854361  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:37.982280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:38.343098  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:38.354931  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:38.357227  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:38.485650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:38.844552  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:38.850808  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:38.853913  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:38.982667  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:39.343628  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:39.350732  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:39.354071  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:39.482707  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:39.843946  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:39.850924  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:39.854226  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:39.981425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:40.343226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:40.354174  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:40.356123  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:40.484392  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:40.843777  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:40.850726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:40.853881  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:40.981986  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:41.345740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:41.350432  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:41.353490  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:41.482252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:41.843746  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:41.852138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:41.854084  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:41.982574  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:42.343901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:42.355267  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:42.356159  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:42.482688  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:42.843642  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:42.850721  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:42.853620  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:42.983605  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:43.343647  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:43.350491  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:43.353379  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:43.482963  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:43.844172  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:43.850934  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:43.853730  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:43.983067  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:44.344302  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:44.354746  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:44.358643  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:44.482208  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:44.843055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:44.851194  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:44.853638  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:44.982375  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:45.343641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:45.350783  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:45.353729  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:45.482608  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:45.844229  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:45.851340  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:45.853842  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:45.982348  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:46.343814  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:46.353735  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:46.355709  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:46.482653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:46.844042  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:46.851360  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:46.854047  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:46.981665  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:47.343553  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:47.350905  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:47.354562  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:47.482730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:47.846642  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:47.851617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:47.854244  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:47.982821  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:48.348755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:48.358993  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:48.362692  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:48.482929  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:48.845986  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:48.855258  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:48.855556  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:48.983374  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:49.345751  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:49.351422  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:49.354407  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:49.483209  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:49.844002  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:49.851581  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:49.854537  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:49.982849  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:50.344447  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:50.351881  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:50.357998  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:50.483462  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:50.844077  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:50.852387  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:50.853875  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:50.981864  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:51.344350  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:51.351990  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:51.354121  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:51.482203  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:51.843589  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:51.850812  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:51.854340  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:51.982119  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:52.344161  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:52.358205  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:52.362823  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:52.482538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:52.843627  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:52.853524  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:52.855356  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:52.981926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:53.353335  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:53.357060  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:53.364900  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:53.481386  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:53.844774  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:53.852117  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:53.854506  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:53.982078  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:54.344388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:54.355804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:54.357721  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:54.482392  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:54.843819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:54.851192  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:54.853694  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:54.982749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:55.343794  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:55.350979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:55.353616  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:55.483659  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:55.844614  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:55.852200  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:55.854994  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:55.981819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:56.346448  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:56.351372  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:56.354248  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:56.483356  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:56.845373  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:56.851934  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:56.854609  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:56.983840  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:57.343760  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:57.352189  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:57.354722  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:57.482829  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:57.843784  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:57.851939  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:57.854678  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:57.982302  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:58.343331  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:58.356187  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:58.356227  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:58.481962  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:58.845817  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:58.851024  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:58.854593  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:58.982321  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:59.343013  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:59.350886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:59.353814  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:59.482493  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:59.843556  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:59.851479  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:59.854058  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:59.982870  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:00.344123  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:00.353639  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:00.355609  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:00.482010  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:00.843282  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:00.852512  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:00.855192  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:00.982342  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:01.342867  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:01.351741  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:01.354483  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:01.482596  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:01.844131  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:01.851227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:01.853947  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:01.981770  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:02.343808  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:02.352846  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:02.356872  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:02.482612  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:02.844117  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:02.852304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:02.854826  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:02.981532  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:03.343607  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:03.351346  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:03.354503  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:03.481578  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:03.843630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:03.852537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:03.855241  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:03.982489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:04.343253  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:04.355957  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:04.356000  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:04.482551  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:04.843723  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:04.851673  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:04.854690  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:04.982741  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:05.343944  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:05.351018  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:05.354425  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:05.481941  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:05.843998  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:05.851090  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:05.853903  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:05.981730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:06.343458  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:06.350577  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:06.357257  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:06.481878  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:06.844591  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:06.851969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:06.855088  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:06.982755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:07.343476  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:07.350610  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:07.353725  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:07.482984  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:07.844667  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:07.852857  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:07.854651  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:07.982363  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:08.344130  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:08.360926  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:08.361253  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:08.482461  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:08.843641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:08.852622  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:08.856162  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:08.982726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:09.343974  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:09.352726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:09.354514  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:09.483161  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:09.842515  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:09.852615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:09.854532  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:09.981943  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:10.343820  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:10.353935  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:10.359508  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:10.482131  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:10.843191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:10.851287  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:10.854045  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:10.981749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:11.343851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:11.351491  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:11.353876  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:11.481536  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:11.843697  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:11.851091  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:11.853952  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:11.981676  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:12.343832  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:12.350662  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:12.354456  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:12.482120  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:12.843123  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:12.851925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:12.853997  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:12.981776  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:13.343554  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:13.350682  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:13.354017  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:13.481252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:13.843434  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:13.850762  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:13.854081  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:13.981962  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:14.344037  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:14.355093  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:14.358080  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:14.482242  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:14.843737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:14.851260  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:14.854963  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:14.982023  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:15.343016  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:15.359111  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:15.359232  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:15.482466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:15.843445  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:15.850828  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:15.853919  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:15.981793  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:16.343722  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:16.352285  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:16.358548  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:16.482453  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:16.843337  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:16.851710  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:16.854739  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:16.983493  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:17.343638  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:17.350687  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:17.354062  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:17.482055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:17.846055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:17.854345  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:17.854486  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:17.982762  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:18.346507  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:18.351621  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:18.359590  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:18.482333  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:18.844243  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:18.853317  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:18.855132  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:18.982361  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:19.342822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:19.354162  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:19.357958  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:19.482021  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:19.843121  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:19.851473  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:19.853715  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:19.982431  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:20.343566  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:20.351680  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:20.355057  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:20.481636  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:20.843907  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:20.853211  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:20.855206  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:20.982090  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:21.343550  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:21.351073  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:21.353575  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:21.482636  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:21.845980  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:21.853611  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:21.853820  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:21.982872  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:22.344878  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:22.351278  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:22.358963  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:22.482017  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:22.849403  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:22.862108  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:22.862343  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:22.982454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:23.343543  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:23.350892  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:23.354605  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:23.482755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:23.844430  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:23.854799  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:23.855692  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:23.983034  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:24.344241  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:24.351616  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:24.357842  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:24.482379  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:24.843926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:24.851730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:24.854235  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:24.982431  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:25.344095  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:25.351193  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:25.354079  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:25.482296  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:25.843468  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:25.850791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:25.854182  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:25.982331  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:26.343495  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:26.350113  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:26.355295  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:26.482122  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:26.842901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:26.851624  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:26.854399  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:26.981993  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:27.343991  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:27.350850  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:27.354305  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:27.482000  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:27.844897  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:27.851532  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:27.854405  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:27.982671  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:28.343600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:28.354783  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:28.360157  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:28.482505  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:28.843416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:28.850855  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:28.853731  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:28.982671  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:29.344150  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:29.353768  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:29.354550  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:29.482086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:29.843348  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:29.850791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:29.854362  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:29.981960  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:30.344009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:30.352387  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:30.354079  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:30.482691  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:30.843995  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:30.851302  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:30.853835  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:30.982426  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:31.343489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:31.351226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:31.354722  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:31.482783  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:31.843721  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:31.850712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:31.854288  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:31.982331  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:32.343280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:32.351227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:32.357673  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:32.482526  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:32.843439  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:32.851751  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:32.854658  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:32.982674  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:33.344087  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:33.351265  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:33.354052  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:33.481455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:33.844196  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:33.851730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:33.854418  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:33.982322  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:34.343838  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:34.350515  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:34.354710  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:34.482648  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:34.843943  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:34.851506  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:34.854246  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:34.983128  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:35.344137  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:35.351761  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:35.354535  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:35.482025  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:35.844107  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:35.853426  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:35.855785  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:35.982538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:36.344259  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:36.351304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:36.354731  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:36.482489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:36.843479  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:36.851754  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:36.854028  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:36.981877  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:37.343702  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:37.350794  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:37.353657  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:37.482615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:37.845236  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:37.851022  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:37.853421  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:37.982324  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:38.343204  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:38.352086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:38.355656  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:38.482605  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:38.844000  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:38.851826  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:38.854246  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:38.982026  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:39.343383  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:39.352138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:39.355217  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:39.482462  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:39.845818  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:39.851951  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:39.855291  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:39.981907  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:40.344023  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:40.354805  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:40.360060  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:40.481826  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:40.843843  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:40.851159  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:40.853600  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:40.982584  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:41.344933  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:41.352640  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:41.355412  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:41.482272  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:41.847080  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:41.852043  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:41.855074  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:41.982055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:42.342851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:42.351158  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:42.354593  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:42.482287  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:42.843416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:42.850633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:42.854340  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:42.981955  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:43.344437  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:43.351085  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:43.354313  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:43.481177  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:43.844506  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:43.851975  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:43.855697  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:43.982199  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:44.344031  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:44.352697  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:44.357728  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:44.483554  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:44.842897  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:44.850902  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:44.853840  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:44.983769  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:45.344437  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:45.351247  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:45.353894  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:45.482205  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:45.843524  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:45.859218  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:45.859458  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:45.982295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:46.343160  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:46.351110  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:46.353975  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:46.481920  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:46.844122  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:46.851403  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:46.856519  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:46.982291  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:47.343612  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:47.350466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:47.353692  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:47.482564  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:47.843871  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:47.852363  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:47.858283  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:47.982219  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:48.345002  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:48.351177  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:48.354510  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:48.481922  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:48.846681  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:48.855143  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:48.857039  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:48.981674  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:49.343267  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:49.351133  503585 kapi.go:107] duration metric: took 5m41.504647458s to wait for kubernetes.io/minikube-addons=registry ...
	I0730 00:12:49.353583  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:49.481443  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:49.843740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:49.854674  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:49.982737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:50.345136  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:50.356018  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:50.483463  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:50.844726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:50.855104  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:50.981699  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:51.345158  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:51.354955  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:51.482641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:51.844967  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:51.855044  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:51.981925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:52.344466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:52.355266  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:52.482394  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:52.843594  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:52.854851  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:52.982627  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:53.343623  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:53.354542  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:53.481737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:53.844813  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:53.855659  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:53.982897  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:54.343737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:54.354962  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:54.482550  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:54.846732  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:54.855535  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:54.982424  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:55.343668  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:55.354927  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:55.482597  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:55.845740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:55.854750  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:55.982889  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:56.343633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:56.360464  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:56.482535  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:56.844293  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:56.855588  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:56.982711  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:57.344303  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:57.355118  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:57.482339  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:57.843329  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:57.855334  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:57.982086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:58.344669  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:58.355085  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:58.483805  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:58.843789  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:58.854255  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:59.105568  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:59.345928  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:59.354804  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:59.482324  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:59.843226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:59.854905  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:59.981323  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:00.343454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:00.354244  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:00.482374  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:00.843575  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:00.855155  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:00.982399  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:01.342900  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:01.354698  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:01.482495  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:01.845227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:01.856824  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:01.982470  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:02.343584  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:02.355371  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:02.482037  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:02.843044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:02.855962  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:02.981527  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:03.344263  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:03.354177  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:03.481183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:03.846675  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:03.856029  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:03.981670  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:04.343459  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:04.355810  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:04.482132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:04.843712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:04.856569  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:04.981866  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:05.344318  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:05.355562  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:05.481995  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:05.844937  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:05.855405  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:05.982141  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:06.733907  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:06.736015  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:06.740803  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:06.843365  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:06.855122  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:06.981711  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:07.344196  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:07.356255  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:07.482582  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:07.843278  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:07.850616  503585 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=ingress-nginx" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0730 00:13:07.850647  503585 kapi.go:107] duration metric: took 6m0.000251922s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0730 00:13:07.850808  503585 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0730 00:13:07.982176  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:08.343270  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:08.778514  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:08.838506  503585 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0730 00:13:08.838538  503585 kapi.go:107] duration metric: took 6m0.00051547s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0730 00:13:08.838624  503585 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0730 00:13:08.990160  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:09.482137  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:09.981860  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:10.479071  503585 kapi.go:107] duration metric: took 6m0.000775326s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0730 00:13:10.479209  503585 out.go:239] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0730 00:13:10.481105  503585 out.go:177] * Enabled addons: nvidia-device-plugin, metrics-server, storage-provisioner, helm-tiller, ingress-dns, inspektor-gadget, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry
	I0730 00:13:10.482442  503585 addons.go:510] duration metric: took 6m11.141560313s for enable addons: enabled=[nvidia-device-plugin metrics-server storage-provisioner helm-tiller ingress-dns inspektor-gadget cloud-spanner yakd default-storageclass volumesnapshots registry]
	I0730 00:13:10.482488  503585 start.go:246] waiting for cluster config update ...
	I0730 00:13:10.482517  503585 start.go:255] writing updated cluster config ...
	I0730 00:13:10.482810  503585 ssh_runner.go:195] Run: rm -f paused
	I0730 00:13:10.556870  503585 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0730 00:13:10.558756  503585 out.go:177] * Done! kubectl is now configured to use "addons-091578" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.326592091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0f70ae5-a8cd-4cbf-a27b-7447787684ef name=/runtime.v1.RuntimeService/Version
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.327773377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=785664bd-652b-4205-9dcd-fd53f103456b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.329189269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722298615329125857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580976,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=785664bd-652b-4205-9dcd-fd53f103456b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.329774540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6ab7a18-7b78-4032-b470-fdc1aba20b7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.329839645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6ab7a18-7b78-4032-b470-fdc1aba20b7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.332696231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5acfe7b0899fcc2d8584ed5af00941d1eab9628079f4acef3fdc589d030da1e9,PodSandboxId:0e4a566afd9f1e73344e88247e663b0a33f80d3048408e7121315249c486fcdd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722298476812635080,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5856dac7-4a90-4abe-aebd-099d4478d1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2ad19fd6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b1aa7bbf2f48290d8a9401a286d313c7f08e82ede06e2ac2e57e96ea5bf944,PodSandboxId:20f4bfe4e0b14ed3e56de8dbe149d0eb45881d071c40dbe9730bae9a5f64bf2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722298426178864768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d2de502-14ec-43c4-8a78-abb357c1c461,},Annotations:map[string]string{io.kubernetes.container.hash: 1d1d8d50,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0846abe1149949cc7446befbe0447ea7cc3759650089503924404ea549c9272e,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1722298415045011763,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 4fe78bfc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3954537c5ae22763a4d9f99c05458965d3bf59c166732eda98b0f5bc12e6ec1d,PodSandboxId:f698839a71d773d3c0f3fb6eda87b71985b73524ba6fb72d4a8bcea69db48a51,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1722298413484446744,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5cxwj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8f9f4161-8096-4c8a-aa52-abb408a40382,},Annotations:map[string]string{io.kube
rnetes.container.hash: 9cc529bc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fd5acdb4fdc37fa65147dde11650e99cd365f091bdf8f2b2d0aced7c848a39,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1722298410118052437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: 788f214,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401664d5c9e4652867c247bf4b5da2ae605670148e608bc9b9adaed60320acd0,PodSandboxId:b54973bbbaaa280e22f5d5bcd6e72c8a1e7dd76e3cace6fd1e12829f6d0a60fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:9818b69f6e49fcc5a284c875f1e58dac1a73486b0dff869a0609145871d752a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5a3c471280784f5608f93a85c51b1d34b68689d20540689077010c90f137701a,State:CONTAINER_RUNNING,CreatedAt:1722298408178870601,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6d9
bd977d4-mf6vz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b77b403f-6ea2-45da-bd8e-2d5f4c3c8888,},Annotations:map[string]string{io.kubernetes.container.hash: e16c8bd9,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c059d666bd522445591093d87c57886bdfc4a4a70e778035d2c552fdd4d5b724,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenesspro
be@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1722298401605029501,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: debe0d0a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deebfb1ded07869fbb688feaa40de5f0ef457b499fd919274e7bda589c6292bb,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.
io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1722298400503875719,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: f1209189,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf914abe5295f6a22d90594e03695c5dacf16f9d6f4cf3bdf43c5bdf1998ceab,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca824
23f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1722298398773121992,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3b56c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05fb6240570b376a7a29dda3fa015121ee6f1673696c0c99e74bbb
f1f78e3523,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1722298396544925160,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: 5ab45a45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:5ff2f9293ebcfb1b225baa1aaccd2e15cd4840c56ab02ee24d6941a3da9837f2,PodSandboxId:ccef286fd01541aedb6b60aa259eb4d5d6c2498dece8fb514c7f8f8beb906a7b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1722298394550133134,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75121907-e5d8-4377-a36b-77be23e5b05d,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd2449f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5257117f839ffb8ab83a8e6c8e1495cb003488cae7cd72afee9e29b0f57e8dcb,PodSandboxId:85f3b4e2819fdf96cec1caa175e4a9292d027b30e1eb3210c3a06bf0c6eb50f7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1722298393036525772,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b84b86e-e802-4cfe-8a48-95f95a7ef99a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1bba97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77fe42014264d8545d4138ebbdc8bfbbe5bf65359128b00a91e312a289d1deb8,PodSandboxId:145ef63475360fcdbe56f0a06fa064435e6f1215757a14b5b88105faf7672f3f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722298391751528488,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dzc79,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1090a3cf-5082-48db-a75e-022ed3a3f789,},Annotations:map[string]string{io.kubernetes.container.hash: 26c75bce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:879fb1de98852aadc697586ca40532447f24641df71631698ee14c80c309f3a1,PodSandboxId:6c022fa2981ce44eeddc84d0d1d049fd1f3759aca905a45372a77a46269a2da4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722298391165647408,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-47xkh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 11492bee-8245-4e31-a7d2-1d63845aab38,},Annotations:map[string]string{io.kubernetes.container.hash: f700b02d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5a9b5472d7206141d539ab2540965a4ac913e84d02cd0a2a9937ab90c9b8aa,PodSandboxId:8e0b8597cda2c6dd493221e04036ba5faf74d0f58361c232b23693e49389b0d8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722298381815403914,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-rqmh5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 13bdd3cd-f2fd-4a90-b12e-411c4389898e,},Annotations:map[string]string{io.kubernetes.container.hash: 771ce801,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2d08297396a5a961d18120712ca2d71c72062e1f1ec5618e7385f378434df4,PodSandboxId:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722298165858870583,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},Annotations:map[string]string{io.kubernetes.container.hash: 32a1acc4,io.
kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2202e7d3177cd2f406f08d30a641c7902683d37774fd6d7b1c0dd6c6894d0d5,PodSandboxId:3dd991c056b05022ed981d94700792e45f49f6f358207e4f801ba80e37d6ff49,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722298025476308914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 383d9f3e-a160-4fa0-bf37-8
472c0c4607c,},Annotations:map[string]string{io.kubernetes.container.hash: cc4bfcc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330,PodSandboxId:196728114e4d214e699ae776c959ca100278a6e4137af3b92c21e5f5d1498bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722298022418591926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lznwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547ad840-f72d-4dd5-b452-c9368370f5f9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 153bfe33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d,PodSandboxId:858fac3db89c5b2fc7935d4a966d6115111dd0ffc309e0f7d45ad0deacd9cfae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722298020025989485,Labels:map[string]string{io.kubern
etes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4j5tl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d252b4fe-1396-4ebd-9108-a3a6874b8245,},Annotations:map[string]string{io.kubernetes.container.hash: 6c307d17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568,PodSandboxId:f64ee108124b6f8f8f5b37dd4af00cef02e7134b10baf25be18c27f867a0dde1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722297999636558006,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-addons-091578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 660a92e94986daa142ceb79dabc94a3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9,PodSandboxId:9d2fd4ffdd8e114a4253bcff4757ed292ef3001ed8d1a56ab16a479ea0c35e86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722297999650841816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-a
ddons-091578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c118cad1ccd1c4e24767aee5e615c791,},Annotations:map[string]string{io.kubernetes.container.hash: c4e52934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952,PodSandboxId:c75cb106ac5f46b9c13c602824162e3eab0feb1285e1b3ab1d3b08b5ddff0d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722297999620409749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-addons-091578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34d9bbdc50bcf61e9805f0bc5a836a73,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363,PodSandboxId:48b1f8b800ebc67aefbd7862cc945009bb8707d7efa0a84b7a0e6509058ba403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722297999617187084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-0
91578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815c089d704e79c0f4fb705347691acd,},Annotations:map[string]string{io.kubernetes.container.hash: a09fa173,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6ab7a18-7b78-4032-b470-fdc1aba20b7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.341522090Z" level=debug msg="Detected compression format gzip" file="compression/compression.go:126"
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.341638799Z" level=debug msg="Using original blob without modification" file="copy/compression.go:226"
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.341894191Z" level=debug msg="ImagePull (0): docker.io/kicbase/echo-server:1.0 (sha256:a055a10ed683d0944c17c642f7cf3259b524ceb32317ec887513b018e67aed1e): 0 bytes (0.00%!)(MISSING)" file="server/image_pull.go:276" id=84474050-4c2b-4801-9347-d8aa3708e5e6 name=/runtime.v1.ImageService/PullImage
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.399589712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ca534e5-c8d8-4f3c-8950-1dc23e0efce6 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.399676415Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ca534e5-c8d8-4f3c-8950-1dc23e0efce6 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.400791531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a961d99-b6cc-4adf-9077-2ac6a2045b12 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.401982761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722298615401959239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580976,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a961d99-b6cc-4adf-9077-2ac6a2045b12 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.407733384Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e230d61-1878-4133-b5f5-ab79cfd1ee6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.407809601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e230d61-1878-4133-b5f5-ab79cfd1ee6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.408418783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5acfe7b0899fcc2d8584ed5af00941d1eab9628079f4acef3fdc589d030da1e9,PodSandboxId:0e4a566afd9f1e73344e88247e663b0a33f80d3048408e7121315249c486fcdd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722298476812635080,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5856dac7-4a90-4abe-aebd-099d4478d1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2ad19fd6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b1aa7bbf2f48290d8a9401a286d313c7f08e82ede06e2ac2e57e96ea5bf944,PodSandboxId:20f4bfe4e0b14ed3e56de8dbe149d0eb45881d071c40dbe9730bae9a5f64bf2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722298426178864768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d2de502-14ec-43c4-8a78-abb357c1c461,},Annotations:map[string]string{io.kubernetes.container.hash: 1d1d8d50,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0846abe1149949cc7446befbe0447ea7cc3759650089503924404ea549c9272e,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1722298415045011763,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 4fe78bfc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3954537c5ae22763a4d9f99c05458965d3bf59c166732eda98b0f5bc12e6ec1d,PodSandboxId:f698839a71d773d3c0f3fb6eda87b71985b73524ba6fb72d4a8bcea69db48a51,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1722298413484446744,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5cxwj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8f9f4161-8096-4c8a-aa52-abb408a40382,},Annotations:map[string]string{io.kube
rnetes.container.hash: 9cc529bc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fd5acdb4fdc37fa65147dde11650e99cd365f091bdf8f2b2d0aced7c848a39,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1722298410118052437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: 788f214,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401664d5c9e4652867c247bf4b5da2ae605670148e608bc9b9adaed60320acd0,PodSandboxId:b54973bbbaaa280e22f5d5bcd6e72c8a1e7dd76e3cace6fd1e12829f6d0a60fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:9818b69f6e49fcc5a284c875f1e58dac1a73486b0dff869a0609145871d752a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5a3c471280784f5608f93a85c51b1d34b68689d20540689077010c90f137701a,State:CONTAINER_RUNNING,CreatedAt:1722298408178870601,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6d9
bd977d4-mf6vz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b77b403f-6ea2-45da-bd8e-2d5f4c3c8888,},Annotations:map[string]string{io.kubernetes.container.hash: e16c8bd9,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c059d666bd522445591093d87c57886bdfc4a4a70e778035d2c552fdd4d5b724,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenesspro
be@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1722298401605029501,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: debe0d0a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deebfb1ded07869fbb688feaa40de5f0ef457b499fd919274e7bda589c6292bb,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.
io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1722298400503875719,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: f1209189,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf914abe5295f6a22d90594e03695c5dacf16f9d6f4cf3bdf43c5bdf1998ceab,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca824
23f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1722298398773121992,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3b56c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05fb6240570b376a7a29dda3fa015121ee6f1673696c0c99e74bbb
f1f78e3523,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1722298396544925160,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: 5ab45a45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:5ff2f9293ebcfb1b225baa1aaccd2e15cd4840c56ab02ee24d6941a3da9837f2,PodSandboxId:ccef286fd01541aedb6b60aa259eb4d5d6c2498dece8fb514c7f8f8beb906a7b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1722298394550133134,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75121907-e5d8-4377-a36b-77be23e5b05d,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd2449f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5257117f839ffb8ab83a8e6c8e1495cb003488cae7cd72afee9e29b0f57e8dcb,PodSandboxId:85f3b4e2819fdf96cec1caa175e4a9292d027b30e1eb3210c3a06bf0c6eb50f7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1722298393036525772,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b84b86e-e802-4cfe-8a48-95f95a7ef99a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1bba97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77fe42014264d8545d4138ebbdc8bfbbe5bf65359128b00a91e312a289d1deb8,PodSandboxId:145ef63475360fcdbe56f0a06fa064435e6f1215757a14b5b88105faf7672f3f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722298391751528488,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dzc79,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1090a3cf-5082-48db-a75e-022ed3a3f789,},Annotations:map[string]string{io.kubernetes.container.hash: 26c75bce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:879fb1de98852aadc697586ca40532447f24641df71631698ee14c80c309f3a1,PodSandboxId:6c022fa2981ce44eeddc84d0d1d049fd1f3759aca905a45372a77a46269a2da4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722298391165647408,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-47xkh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 11492bee-8245-4e31-a7d2-1d63845aab38,},Annotations:map[string]string{io.kubernetes.container.hash: f700b02d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5a9b5472d7206141d539ab2540965a4ac913e84d02cd0a2a9937ab90c9b8aa,PodSandboxId:8e0b8597cda2c6dd493221e04036ba5faf74d0f58361c232b23693e49389b0d8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722298381815403914,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-rqmh5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 13bdd3cd-f2fd-4a90-b12e-411c4389898e,},Annotations:map[string]string{io.kubernetes.container.hash: 771ce801,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2d08297396a5a961d18120712ca2d71c72062e1f1ec5618e7385f378434df4,PodSandboxId:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722298165858870583,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},Annotations:map[string]string{io.kubernetes.container.hash: 32a1acc4,io.
kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2202e7d3177cd2f406f08d30a641c7902683d37774fd6d7b1c0dd6c6894d0d5,PodSandboxId:3dd991c056b05022ed981d94700792e45f49f6f358207e4f801ba80e37d6ff49,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722298025476308914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 383d9f3e-a160-4fa0-bf37-8
472c0c4607c,},Annotations:map[string]string{io.kubernetes.container.hash: cc4bfcc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330,PodSandboxId:196728114e4d214e699ae776c959ca100278a6e4137af3b92c21e5f5d1498bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722298022418591926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lznwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547ad840-f72d-4dd5-b452-c9368370f5f9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 153bfe33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d,PodSandboxId:858fac3db89c5b2fc7935d4a966d6115111dd0ffc309e0f7d45ad0deacd9cfae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722298020025989485,Labels:map[string]string{io.kubern
etes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4j5tl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d252b4fe-1396-4ebd-9108-a3a6874b8245,},Annotations:map[string]string{io.kubernetes.container.hash: 6c307d17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568,PodSandboxId:f64ee108124b6f8f8f5b37dd4af00cef02e7134b10baf25be18c27f867a0dde1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722297999636558006,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-addons-091578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 660a92e94986daa142ceb79dabc94a3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9,PodSandboxId:9d2fd4ffdd8e114a4253bcff4757ed292ef3001ed8d1a56ab16a479ea0c35e86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722297999650841816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-a
ddons-091578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c118cad1ccd1c4e24767aee5e615c791,},Annotations:map[string]string{io.kubernetes.container.hash: c4e52934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952,PodSandboxId:c75cb106ac5f46b9c13c602824162e3eab0feb1285e1b3ab1d3b08b5ddff0d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722297999620409749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-addons-091578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34d9bbdc50bcf61e9805f0bc5a836a73,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363,PodSandboxId:48b1f8b800ebc67aefbd7862cc945009bb8707d7efa0a84b7a0e6509058ba403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722297999617187084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-0
91578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815c089d704e79c0f4fb705347691acd,},Annotations:map[string]string{io.kubernetes.container.hash: a09fa173,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e230d61-1878-4133-b5f5-ab79cfd1ee6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.435782399Z" level=debug msg="Applying tar in /var/lib/containers/storage/overlay/385288f36387f526d4826ab7d5cf1ab0e58bb5684a8257e8d19d9da3773b85da/diff" file="overlay/overlay.go:2160"
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.472132273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c224475c-c1ac-4f36-84a6-12506055e9d0 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.472473868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c224475c-c1ac-4f36-84a6-12506055e9d0 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.473488562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c063545d-2f86-4847-9e6d-6c518fcc264c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.474639228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722298615474614889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580976,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c063545d-2f86-4847-9e6d-6c518fcc264c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.475186350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99984e46-fd21-4219-b386-925ef4d431c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.475296851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99984e46-fd21-4219-b386-925ef4d431c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.475924940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5acfe7b0899fcc2d8584ed5af00941d1eab9628079f4acef3fdc589d030da1e9,PodSandboxId:0e4a566afd9f1e73344e88247e663b0a33f80d3048408e7121315249c486fcdd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722298476812635080,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5856dac7-4a90-4abe-aebd-099d4478d1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 2ad19fd6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b1aa7bbf2f48290d8a9401a286d313c7f08e82ede06e2ac2e57e96ea5bf944,PodSandboxId:20f4bfe4e0b14ed3e56de8dbe149d0eb45881d071c40dbe9730bae9a5f64bf2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722298426178864768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d2de502-14ec-43c4-8a78-abb357c1c461,},Annotations:map[string]string{io.kubernetes.container.hash: 1d1d8d50,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0846abe1149949cc7446befbe0447ea7cc3759650089503924404ea549c9272e,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1722298415045011763,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 4fe78bfc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3954537c5ae22763a4d9f99c05458965d3bf59c166732eda98b0f5bc12e6ec1d,PodSandboxId:f698839a71d773d3c0f3fb6eda87b71985b73524ba6fb72d4a8bcea69db48a51,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1722298413484446744,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-5cxwj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8f9f4161-8096-4c8a-aa52-abb408a40382,},Annotations:map[string]string{io.kube
rnetes.container.hash: 9cc529bc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fd5acdb4fdc37fa65147dde11650e99cd365f091bdf8f2b2d0aced7c848a39,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1722298410118052437,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: 788f214,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:401664d5c9e4652867c247bf4b5da2ae605670148e608bc9b9adaed60320acd0,PodSandboxId:b54973bbbaaa280e22f5d5bcd6e72c8a1e7dd76e3cace6fd1e12829f6d0a60fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:9818b69f6e49fcc5a284c875f1e58dac1a73486b0dff869a0609145871d752a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5a3c471280784f5608f93a85c51b1d34b68689d20540689077010c90f137701a,State:CONTAINER_RUNNING,CreatedAt:1722298408178870601,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6d9
bd977d4-mf6vz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b77b403f-6ea2-45da-bd8e-2d5f4c3c8888,},Annotations:map[string]string{io.kubernetes.container.hash: e16c8bd9,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c059d666bd522445591093d87c57886bdfc4a4a70e778035d2c552fdd4d5b724,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenesspro
be@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1722298401605029501,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: debe0d0a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deebfb1ded07869fbb688feaa40de5f0ef457b499fd919274e7bda589c6292bb,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.
io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1722298400503875719,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: f1209189,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf914abe5295f6a22d90594e03695c5dacf16f9d6f4cf3bdf43c5bdf1998ceab,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca824
23f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1722298398773121992,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3b56c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05fb6240570b376a7a29dda3fa015121ee6f1673696c0c99e74bbb
f1f78e3523,PodSandboxId:2f6f4b01010d7c48598c801e31bbeca82423f7e63a6eb79c65ac7bc6b5c26b87,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1722298396544925160,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-52djf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f0e9aeb-dcc9-4b01-8442-8c1f93583cea,},Annotations:map[string]string{io.kubernetes.container.hash: 5ab45a45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:5ff2f9293ebcfb1b225baa1aaccd2e15cd4840c56ab02ee24d6941a3da9837f2,PodSandboxId:ccef286fd01541aedb6b60aa259eb4d5d6c2498dece8fb514c7f8f8beb906a7b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1722298394550133134,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75121907-e5d8-4377-a36b-77be23e5b05d,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd2449f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5257117f839ffb8ab83a8e6c8e1495cb003488cae7cd72afee9e29b0f57e8dcb,PodSandboxId:85f3b4e2819fdf96cec1caa175e4a9292d027b30e1eb3210c3a06bf0c6eb50f7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1722298393036525772,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b84b86e-e802-4cfe-8a48-95f95a7ef99a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1bba97,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77fe42014264d8545d4138ebbdc8bfbbe5bf65359128b00a91e312a289d1deb8,PodSandboxId:145ef63475360fcdbe56f0a06fa064435e6f1215757a14b5b88105faf7672f3f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722298391751528488,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dzc79,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1090a3cf-5082-48db-a75e-022ed3a3f789,},Annotations:map[string]string{io.kubernetes.container.hash: 26c75bce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:879fb1de98852aadc697586ca40532447f24641df71631698ee14c80c309f3a1,PodSandboxId:6c022fa2981ce44eeddc84d0d1d049fd1f3759aca905a45372a77a46269a2da4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722298391165647408,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-47xkh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 11492bee-8245-4e31-a7d2-1d63845aab38,},Annotations:map[string]string{io.kubernetes.container.hash: f700b02d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d5a9b5472d7206141d539ab2540965a4ac913e84d02cd0a2a9937ab90c9b8aa,PodSandboxId:8e0b8597cda2c6dd493221e04036ba5faf74d0f58361c232b23693e49389b0d8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722298381815403914,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-rqmh5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 13bdd3cd-f2fd-4a90-b12e-411c4389898e,},Annotations:map[string]string{io.kubernetes.container.hash: 771ce801,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2d08297396a5a961d18120712ca2d71c72062e1f1ec5618e7385f378434df4,PodSandboxId:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722298165858870583,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},Annotations:map[string]string{io.kubernetes.container.hash: 32a1acc4,io.
kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2202e7d3177cd2f406f08d30a641c7902683d37774fd6d7b1c0dd6c6894d0d5,PodSandboxId:3dd991c056b05022ed981d94700792e45f49f6f358207e4f801ba80e37d6ff49,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722298025476308914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 383d9f3e-a160-4fa0-bf37-8
472c0c4607c,},Annotations:map[string]string{io.kubernetes.container.hash: cc4bfcc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330,PodSandboxId:196728114e4d214e699ae776c959ca100278a6e4137af3b92c21e5f5d1498bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722298022418591926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lznwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547ad840-f72d-4dd5-b452-c9368370f5f9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 153bfe33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d,PodSandboxId:858fac3db89c5b2fc7935d4a966d6115111dd0ffc309e0f7d45ad0deacd9cfae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722298020025989485,Labels:map[string]string{io.kubern
etes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4j5tl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d252b4fe-1396-4ebd-9108-a3a6874b8245,},Annotations:map[string]string{io.kubernetes.container.hash: 6c307d17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568,PodSandboxId:f64ee108124b6f8f8f5b37dd4af00cef02e7134b10baf25be18c27f867a0dde1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722297999636558006,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-addons-091578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 660a92e94986daa142ceb79dabc94a3b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9,PodSandboxId:9d2fd4ffdd8e114a4253bcff4757ed292ef3001ed8d1a56ab16a479ea0c35e86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722297999650841816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-a
ddons-091578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c118cad1ccd1c4e24767aee5e615c791,},Annotations:map[string]string{io.kubernetes.container.hash: c4e52934,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952,PodSandboxId:c75cb106ac5f46b9c13c602824162e3eab0feb1285e1b3ab1d3b08b5ddff0d34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722297999620409749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-addons-091578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34d9bbdc50bcf61e9805f0bc5a836a73,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363,PodSandboxId:48b1f8b800ebc67aefbd7862cc945009bb8707d7efa0a84b7a0e6509058ba403,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722297999617187084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-0
91578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815c089d704e79c0f4fb705347691acd,},Annotations:map[string]string{io.kubernetes.container.hash: a09fa173,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99984e46-fd21-4219-b386-925ef4d431c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:16:55 addons-091578 crio[684]: time="2024-07-30 00:16:55.479756475Z" level=debug msg="received signal" file="crio/main.go:57" signal="broken pipe"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	5acfe7b0899fc       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                                              2 minutes ago       Running             nginx                                    0                   0e4a566afd9f1       nginx
	96b1aa7bbf2f4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          3 minutes ago       Running             busybox                                  0                   20f4bfe4e0b14       busybox
	0846abe114994       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	3954537c5ae22       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 3 minutes ago       Running             gcp-auth                                 0                   f698839a71d77       gcp-auth-5db96cd9b4-5cxwj
	46fd5acdb4fdc       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	401664d5c9e46       registry.k8s.io/ingress-nginx/controller@sha256:9818b69f6e49fcc5a284c875f1e58dac1a73486b0dff869a0609145871d752a9                             3 minutes ago       Running             controller                               0                   b54973bbbaaa2       ingress-nginx-controller-6d9bd977d4-mf6vz
	c059d666bd522       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	deebfb1ded078       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	bf914abe5295f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	05fb6240570b3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	5ff2f9293ebcf       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   ccef286fd0154       csi-hostpath-attacher-0
	5257117f839ff       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   85f3b4e2819fd       csi-hostpath-resizer-0
	77fe42014264d       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                                             3 minutes ago       Exited              patch                                    1                   145ef63475360       ingress-nginx-admission-patch-dzc79
	879fb1de98852       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2                   3 minutes ago       Exited              create                                   0                   6c022fa2981ce       ingress-nginx-admission-create-47xkh
	9d5a9b5472d72       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago       Running             local-path-provisioner                   0                   8e0b8597cda2c       local-path-provisioner-8d985888d-rqmh5
	6a2d08297396a       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872                        7 minutes ago       Running             metrics-server                           0                   f5db33d35758e       metrics-server-c59844bb4-4z28f
	d2202e7d3177c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             9 minutes ago       Running             storage-provisioner                      0                   3dd991c056b05       storage-provisioner
	f0506da1a2ae3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                                             9 minutes ago       Running             coredns                                  0                   196728114e4d2       coredns-7db6d8ff4d-lznwz
	ca15b02295bfe       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                                             9 minutes ago       Running             kube-proxy                               0                   858fac3db89c5       kube-proxy-4j5tl
	499733049fe68       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                                             10 minutes ago      Running             etcd                                     0                   9d2fd4ffdd8e1       etcd-addons-091578
	3ee890a84b948       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                                             10 minutes ago      Running             kube-scheduler                           0                   f64ee108124b6       kube-scheduler-addons-091578
	60041ecdf7b4c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                                             10 minutes ago      Running             kube-controller-manager                  0                   c75cb106ac5f4       kube-controller-manager-addons-091578
	cdb96aea78f76       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                                             10 minutes ago      Running             kube-apiserver                           0                   48b1f8b800ebc       kube-apiserver-addons-091578
	
	
	==> coredns [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330] <==
	[INFO] 10.244.0.8:50828 - 33092 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000263813s
	[INFO] 10.244.0.8:58219 - 7997 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000120869s
	[INFO] 10.244.0.8:36944 - 42042 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104488s
	[INFO] 10.244.0.8:36944 - 35380 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100083s
	[INFO] 10.244.0.8:33556 - 5150 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000061005s
	[INFO] 10.244.0.8:33556 - 62493 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067952s
	[INFO] 10.244.0.8:58219 - 33343 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000247708s
	[INFO] 10.244.0.8:60304 - 41677 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088978s
	[INFO] 10.244.0.8:60304 - 46032 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000063832s
	[INFO] 10.244.0.8:56381 - 48491 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047019s
	[INFO] 10.244.0.8:56381 - 25685 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003828s
	[INFO] 10.244.0.8:42630 - 2173 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041324s
	[INFO] 10.244.0.8:42630 - 25471 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000037471s
	[INFO] 10.244.0.8:49604 - 23054 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00004359s
	[INFO] 10.244.0.8:49604 - 52488 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046187s
	[INFO] 10.244.0.22:35861 - 59286 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001203712s
	[INFO] 10.244.0.22:38480 - 41814 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000590977s
	[INFO] 10.244.0.22:41469 - 6959 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000065748s
	[INFO] 10.244.0.22:35426 - 56467 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000314292s
	[INFO] 10.244.0.22:52095 - 9670 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077414s
	[INFO] 10.244.0.22:45540 - 26189 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000283411s
	[INFO] 10.244.0.22:59024 - 60615 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000630789s
	[INFO] 10.244.0.22:46544 - 43440 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000540706s
	[INFO] 10.244.0.26:38295 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000436027s
	[INFO] 10.244.0.26:33434 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117078s
	
	
	==> describe nodes <==
	Name:               addons-091578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-091578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=addons-091578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T00_06_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-091578
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-091578"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:06:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-091578
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:16:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:14:54 +0000   Tue, 30 Jul 2024 00:06:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:14:54 +0000   Tue, 30 Jul 2024 00:06:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:14:54 +0000   Tue, 30 Jul 2024 00:06:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:14:54 +0000   Tue, 30 Jul 2024 00:06:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-091578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2175b4f65f5b42f89841ba61d88b3014
	  System UUID:                2175b4f6-5f5b-42f8-9841-ba61d88b3014
	  Boot ID:                    ff39aba3-5037-47b0-bfbc-125a8399a9e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     hello-world-app-6778b5fc9f-jww7v             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gcp-auth                    gcp-auth-5db96cd9b4-5cxwj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-mf6vz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         9m48s
	  kube-system                 coredns-7db6d8ff4d-lznwz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m56s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 csi-hostpathplugin-52djf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 etcd-addons-091578                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-091578                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-091578        200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4j5tl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-scheduler-addons-091578                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-c59844bb4-4z28f               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         9m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  local-path-storage          local-path-provisioner-8d985888d-rqmh5       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m54s              kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-091578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-091578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-091578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-091578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-091578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-091578 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-091578 status is now: NodeReady
	  Normal  RegisteredNode           9m57s              node-controller  Node addons-091578 event: Registered Node addons-091578 in Controller
	
	
	==> dmesg <==
	[  +0.075855] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.293820] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	[  +0.149691] kauditd_printk_skb: 21 callbacks suppressed
	[Jul30 00:07] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.313374] kauditd_printk_skb: 158 callbacks suppressed
	[Jul30 00:09] kauditd_printk_skb: 43 callbacks suppressed
	[Jul30 00:10] kauditd_printk_skb: 4 callbacks suppressed
	[Jul30 00:12] kauditd_printk_skb: 2 callbacks suppressed
	[Jul30 00:13] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.913028] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.005842] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.505645] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.711362] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.063260] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.908418] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.252850] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.986225] kauditd_printk_skb: 17 callbacks suppressed
	[Jul30 00:14] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.009539] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.004222] kauditd_printk_skb: 59 callbacks suppressed
	[  +9.652505] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.135475] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.809767] kauditd_printk_skb: 34 callbacks suppressed
	[ +12.956189] kauditd_printk_skb: 7 callbacks suppressed
	[Jul30 00:16] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9] <==
	{"level":"warn","ts":"2024-07-30T00:13:29.845108Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T00:13:29.52951Z","time spent":"315.535664ms","remote":"127.0.0.1:53868","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-5gszhrclg236pq3nh3xxg2ls24\" mod_revision:1389 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-5gszhrclg236pq3nh3xxg2ls24\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-5gszhrclg236pq3nh3xxg2ls24\" > >"}
	{"level":"info","ts":"2024-07-30T00:13:29.845152Z","caller":"traceutil/trace.go:171","msg":"trace[1970816166] linearizableReadLoop","detail":"{readStateIndex:1549; appliedIndex:1549; }","duration":"190.890476ms","start":"2024-07-30T00:13:29.654247Z","end":"2024-07-30T00:13:29.845138Z","steps":["trace[1970816166] 'read index received'  (duration: 190.881847ms)","trace[1970816166] 'applied index is now lower than readState.Index'  (duration: 7.174µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T00:13:29.846005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.741718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-30T00:13:29.84608Z","caller":"traceutil/trace.go:171","msg":"trace[1427918091] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1450; }","duration":"191.850091ms","start":"2024-07-30T00:13:29.654222Z","end":"2024-07-30T00:13:29.846072Z","steps":["trace[1427918091] 'agreement among raft nodes before linearized reading'  (duration: 190.965308ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:13:30.039968Z","caller":"traceutil/trace.go:171","msg":"trace[977606159] transaction","detail":"{read_only:false; response_revision:1452; number_of_response:1; }","duration":"191.018514ms","start":"2024-07-30T00:13:29.848931Z","end":"2024-07-30T00:13:30.039949Z","steps":["trace[977606159] 'process raft request'  (duration: 190.301985ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:13:30.040183Z","caller":"traceutil/trace.go:171","msg":"trace[1145085867] transaction","detail":"{read_only:false; response_revision:1451; number_of_response:1; }","duration":"350.836892ms","start":"2024-07-30T00:13:29.689307Z","end":"2024-07-30T00:13:30.040144Z","steps":["trace[1145085867] 'process raft request'  (duration: 348.385941ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:13:30.040265Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T00:13:29.689291Z","time spent":"350.933459ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-6d9bd977d4-mf6vz.17e6d5474a973613\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-6d9bd977d4-mf6vz.17e6d5474a973613\" value_size:675 lease:6697856504141342883 >> failure:<>"}
	{"level":"info","ts":"2024-07-30T00:13:42.125724Z","caller":"traceutil/trace.go:171","msg":"trace[946108652] transaction","detail":"{read_only:false; response_revision:1513; number_of_response:1; }","duration":"193.623085ms","start":"2024-07-30T00:13:41.932074Z","end":"2024-07-30T00:13:42.125697Z","steps":["trace[946108652] 'process raft request'  (duration: 193.463048ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:14:19.012088Z","caller":"traceutil/trace.go:171","msg":"trace[894129127] linearizableReadLoop","detail":"{readStateIndex:1881; appliedIndex:1880; }","duration":"406.90416ms","start":"2024-07-30T00:14:18.605151Z","end":"2024-07-30T00:14:19.012055Z","steps":["trace[894129127] 'read index received'  (duration: 400.787649ms)","trace[894129127] 'applied index is now lower than readState.Index'  (duration: 6.115497ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T00:14:19.012521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.926006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5997"}
	{"level":"info","ts":"2024-07-30T00:14:19.012592Z","caller":"traceutil/trace.go:171","msg":"trace[1862980982] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1765; }","duration":"158.02703ms","start":"2024-07-30T00:14:18.854554Z","end":"2024-07-30T00:14:19.012581Z","steps":["trace[1862980982] 'agreement among raft nodes before linearized reading'  (duration: 157.879296ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:14:19.01251Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"407.149224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-07-30T00:14:19.012738Z","caller":"traceutil/trace.go:171","msg":"trace[939603819] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1765; }","duration":"407.609043ms","start":"2024-07-30T00:14:18.605118Z","end":"2024-07-30T00:14:19.012727Z","steps":["trace[939603819] 'agreement among raft nodes before linearized reading'  (duration: 407.032932ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:14:19.012786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T00:14:18.605105Z","time spent":"407.663983ms","remote":"127.0.0.1:53868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":522,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2024-07-30T00:14:24.113237Z","caller":"traceutil/trace.go:171","msg":"trace[221579905] linearizableReadLoop","detail":"{readStateIndex:1904; appliedIndex:1903; }","duration":"256.806987ms","start":"2024-07-30T00:14:23.856417Z","end":"2024-07-30T00:14:24.113224Z","steps":["trace[221579905] 'read index received'  (duration: 256.680755ms)","trace[221579905] 'applied index is now lower than readState.Index'  (duration: 125.816µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-30T00:14:24.113498Z","caller":"traceutil/trace.go:171","msg":"trace[250618291] transaction","detail":"{read_only:false; response_revision:1787; number_of_response:1; }","duration":"272.788921ms","start":"2024-07-30T00:14:23.840699Z","end":"2024-07-30T00:14:24.113488Z","steps":["trace[250618291] 'process raft request'  (duration: 272.438756ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:14:24.113721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.258173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5997"}
	{"level":"info","ts":"2024-07-30T00:14:24.113744Z","caller":"traceutil/trace.go:171","msg":"trace[1898224654] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1787; }","duration":"257.346306ms","start":"2024-07-30T00:14:23.856391Z","end":"2024-07-30T00:14:24.113737Z","steps":["trace[1898224654] 'agreement among raft nodes before linearized reading'  (duration: 257.214377ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:14:36.704624Z","caller":"traceutil/trace.go:171","msg":"trace[1690797861] transaction","detail":"{read_only:false; response_revision:1898; number_of_response:1; }","duration":"227.590844ms","start":"2024-07-30T00:14:36.477018Z","end":"2024-07-30T00:14:36.704609Z","steps":["trace[1690797861] 'process raft request'  (duration: 227.192188ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:14:36.705065Z","caller":"traceutil/trace.go:171","msg":"trace[1770232593] linearizableReadLoop","detail":"{readStateIndex:2020; appliedIndex:2019; }","duration":"175.344969ms","start":"2024-07-30T00:14:36.528955Z","end":"2024-07-30T00:14:36.7043Z","steps":["trace[1770232593] 'read index received'  (duration: 175.184409ms)","trace[1770232593] 'applied index is now lower than readState.Index'  (duration: 160.068µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T00:14:36.705521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.563453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-07-30T00:14:36.709044Z","caller":"traceutil/trace.go:171","msg":"trace[237935070] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1898; }","duration":"180.116607ms","start":"2024-07-30T00:14:36.528916Z","end":"2024-07-30T00:14:36.709033Z","steps":["trace[237935070] 'agreement among raft nodes before linearized reading'  (duration: 176.507125ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:16:40.322924Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1175}
	{"level":"info","ts":"2024-07-30T00:16:40.395557Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1175,"took":"71.897778ms","hash":3698398867,"current-db-size-bytes":8245248,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":5062656,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-07-30T00:16:40.395746Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3698398867,"revision":1175,"compact-revision":-1}
	
	
	==> kernel <==
	 00:16:55 up 10 min,  0 users,  load average: 0.49, 0.57, 0.35
	Linux addons-091578 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363] <==
	W0730 00:13:10.612956       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.108.163.99:443: connect: connection refused
	E0730 00:13:10.613024       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.108.163.99:443: connect: connection refused
	E0730 00:13:54.036847       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:41834: use of closed network connection
	E0730 00:13:54.228609       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:41870: use of closed network connection
	I0730 00:14:13.025810       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.52.11"}
	E0730 00:14:28.235032       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.214:8443->10.244.0.30:38110: read: connection reset by peer
	I0730 00:14:31.923092       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0730 00:14:32.124704       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0730 00:14:32.348719       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.128.150"}
	I0730 00:14:36.760074       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0730 00:14:37.826623       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0730 00:14:51.436527       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.436575       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 00:14:51.465266       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.465451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 00:14:51.486108       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.486170       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 00:14:51.492568       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.492612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 00:14:51.524091       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.528640       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0730 00:14:52.493461       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0730 00:14:52.525076       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0730 00:14:52.533979       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0730 00:16:52.910849       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.59.30"}
	
	
	==> kube-controller-manager [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952] <==
	W0730 00:15:12.125793       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:15:12.125855       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:15:12.790474       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:15:12.790611       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:15:28.036785       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:15:28.036952       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:15:32.773569       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:15:32.773704       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:15:34.172658       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:15:34.172704       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:15:57.637960       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:15:57.638209       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:16:01.730222       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:16:01.730259       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:16:10.637628       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:16:10.637750       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:16:24.726536       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:16:24.726596       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:16:37.867399       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:16:37.867459       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:16:43.494879       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:16:43.494927       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0730 00:16:52.769412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="51.073268ms"
	I0730 00:16:52.794467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="24.864313ms"
	I0730 00:16:52.795073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="47.803µs"
	
	
	==> kube-proxy [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d] <==
	I0730 00:07:00.639704       1 server_linux.go:69] "Using iptables proxy"
	I0730 00:07:00.665725       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	I0730 00:07:00.783987       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 00:07:00.784032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 00:07:00.784048       1 server_linux.go:165] "Using iptables Proxier"
	I0730 00:07:00.787713       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 00:07:00.787904       1 server.go:872] "Version info" version="v1.30.3"
	I0730 00:07:00.787915       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:07:00.792350       1 config.go:192] "Starting service config controller"
	I0730 00:07:00.792363       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 00:07:00.792380       1 config.go:101] "Starting endpoint slice config controller"
	I0730 00:07:00.792383       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 00:07:00.792690       1 config.go:319] "Starting node config controller"
	I0730 00:07:00.792702       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 00:07:00.893248       1 shared_informer.go:320] Caches are synced for node config
	I0730 00:07:00.893275       1 shared_informer.go:320] Caches are synced for service config
	I0730 00:07:00.893292       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568] <==
	W0730 00:06:41.824164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0730 00:06:41.824198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0730 00:06:41.824180       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 00:06:41.824284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 00:06:41.824252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 00:06:41.824367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 00:06:42.701691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 00:06:42.701736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 00:06:42.707828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 00:06:42.707885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0730 00:06:42.901708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 00:06:42.901751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 00:06:42.950945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0730 00:06:42.951172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0730 00:06:42.959704       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 00:06:42.959750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 00:06:42.976566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 00:06:42.976679       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 00:06:43.003719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 00:06:43.003764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0730 00:06:43.049616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 00:06:43.049656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 00:06:43.109249       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 00:06:43.109289       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0730 00:06:45.213391       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 00:16:44 addons-091578 kubelet[1266]: E0730 00:16:44.340514    1266 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:16:44 addons-091578 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:16:44 addons-091578 kubelet[1266]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:16:44 addons-091578 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:16:44 addons-091578 kubelet[1266]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: I0730 00:16:52.764458    1266 topology_manager.go:215] "Topology Admit Handler" podUID="57295c4a-a1ae-411d-a074-b6800a5b22f4" podNamespace="default" podName="hello-world-app-6778b5fc9f-jww7v"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: E0730 00:16:52.764575    1266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c332b1b-6725-4ef3-99fa-31ee6204a88d" containerName="task-pv-container"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: E0730 00:16:52.764589    1266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="279040f4-8d6d-414e-824c-91b2e90676b4" containerName="gadget"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: E0730 00:16:52.764598    1266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3945078-d405-4d3b-86fa-941fda4173df" containerName="volume-snapshot-controller"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: E0730 00:16:52.764604    1266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc3f1272-bf9e-40bd-9504-79a1529e0738" containerName="volume-snapshot-controller"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: I0730 00:16:52.764652    1266 memory_manager.go:354] "RemoveStaleState removing state" podUID="279040f4-8d6d-414e-824c-91b2e90676b4" containerName="gadget"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: I0730 00:16:52.764663    1266 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3945078-d405-4d3b-86fa-941fda4173df" containerName="volume-snapshot-controller"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: I0730 00:16:52.764672    1266 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3f1272-bf9e-40bd-9504-79a1529e0738" containerName="volume-snapshot-controller"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: I0730 00:16:52.764678    1266 memory_manager.go:354] "RemoveStaleState removing state" podUID="279040f4-8d6d-414e-824c-91b2e90676b4" containerName="gadget"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: I0730 00:16:52.764685    1266 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c332b1b-6725-4ef3-99fa-31ee6204a88d" containerName="task-pv-container"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: I0730 00:16:52.847450    1266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2v7h\" (UniqueName: \"kubernetes.io/projected/57295c4a-a1ae-411d-a074-b6800a5b22f4-kube-api-access-p2v7h\") pod \"hello-world-app-6778b5fc9f-jww7v\" (UID: \"57295c4a-a1ae-411d-a074-b6800a5b22f4\") " pod="default/hello-world-app-6778b5fc9f-jww7v"
	Jul 30 00:16:52 addons-091578 kubelet[1266]: I0730 00:16:52.847509    1266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/57295c4a-a1ae-411d-a074-b6800a5b22f4-gcp-creds\") pod \"hello-world-app-6778b5fc9f-jww7v\" (UID: \"57295c4a-a1ae-411d-a074-b6800a5b22f4\") " pod="default/hello-world-app-6778b5fc9f-jww7v"
	Jul 30 00:16:53 addons-091578 kubelet[1266]: I0730 00:16:53.958292    1266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd56g\" (UniqueName: \"kubernetes.io/projected/7057a5f6-2896-4f06-9824-0772c339905f-kube-api-access-jd56g\") pod \"7057a5f6-2896-4f06-9824-0772c339905f\" (UID: \"7057a5f6-2896-4f06-9824-0772c339905f\") "
	Jul 30 00:16:53 addons-091578 kubelet[1266]: I0730 00:16:53.960517    1266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7057a5f6-2896-4f06-9824-0772c339905f-kube-api-access-jd56g" (OuterVolumeSpecName: "kube-api-access-jd56g") pod "7057a5f6-2896-4f06-9824-0772c339905f" (UID: "7057a5f6-2896-4f06-9824-0772c339905f"). InnerVolumeSpecName "kube-api-access-jd56g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 30 00:16:53 addons-091578 kubelet[1266]: I0730 00:16:53.995624    1266 scope.go:117] "RemoveContainer" containerID="57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da"
	Jul 30 00:16:54 addons-091578 kubelet[1266]: I0730 00:16:54.023280    1266 scope.go:117] "RemoveContainer" containerID="57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da"
	Jul 30 00:16:54 addons-091578 kubelet[1266]: E0730 00:16:54.024306    1266 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da\": container with ID starting with 57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da not found: ID does not exist" containerID="57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da"
	Jul 30 00:16:54 addons-091578 kubelet[1266]: I0730 00:16:54.024465    1266 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da"} err="failed to get container status \"57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da\": rpc error: code = NotFound desc = could not find container \"57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da\": container with ID starting with 57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da not found: ID does not exist"
	Jul 30 00:16:54 addons-091578 kubelet[1266]: I0730 00:16:54.059351    1266 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jd56g\" (UniqueName: \"kubernetes.io/projected/7057a5f6-2896-4f06-9824-0772c339905f-kube-api-access-jd56g\") on node \"addons-091578\" DevicePath \"\""
	Jul 30 00:16:54 addons-091578 kubelet[1266]: I0730 00:16:54.339264    1266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7057a5f6-2896-4f06-9824-0772c339905f" path="/var/lib/kubelet/pods/7057a5f6-2896-4f06-9824-0772c339905f/volumes"
	
	
	==> storage-provisioner [d2202e7d3177cd2f406f08d30a641c7902683d37774fd6d7b1c0dd6c6894d0d5] <==
	I0730 00:07:05.781118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0730 00:07:06.276859       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0730 00:07:06.276929       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0730 00:07:06.320535       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0730 00:07:06.320699       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-091578_ebc3b303-5950-46c1-89f3-8f9726695c90!
	I0730 00:07:06.320741       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5173fa91-320a-41ce-b9af-0e8c9dc5a9ac", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-091578_ebc3b303-5950-46c1-89f3-8f9726695c90 became leader
	I0730 00:07:06.637388       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-091578_ebc3b303-5950-46c1-89f3-8f9726695c90!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-091578 -n addons-091578
helpers_test.go:261: (dbg) Run:  kubectl --context addons-091578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-47xkh ingress-nginx-admission-patch-dzc79
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-091578 describe pod ingress-nginx-admission-create-47xkh ingress-nginx-admission-patch-dzc79
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-091578 describe pod ingress-nginx-admission-create-47xkh ingress-nginx-admission-patch-dzc79: exit status 1 (59.359342ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-47xkh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dzc79" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-091578 describe pod ingress-nginx-admission-create-47xkh ingress-nginx-admission-patch-dzc79: exit status 1
--- FAIL: TestAddons/parallel/Ingress (145.19s)
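The ssh step above reported "Process exited with status 28", which is the remote curl's exit code; curl code 28 is CURLE_OPERATION_TIMEDOUT, so the request reached the node but the ingress controller never answered before the deadline. A minimal manual triage sketch, assuming the addons-091578 profile is still running and that the ingress addon keeps its usual ingress-nginx-controller deployment name (the 10-second cap below is an arbitrary choice added for triage, not part of the test):

	out/minikube-linux-amd64 -p addons-091578 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-091578 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-091578 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50

If the controller pods are Ready but the curl still times out, the gap is typically between port 80 on the node and the controller's service, rather than in the nginx test pod itself.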

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (364.29s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.931047ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-4z28f" [8efac445-c550-499b-9e0a-05b83969bc15] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005419944s
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (85.063955ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 7m17.369190448s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (78.752ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 7m20.370070835s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (66.482164ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 7m24.322489785s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (85.335361ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 7m28.572005834s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (77.79086ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 7m38.998209004s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (65.637052ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 7m55.20710723s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (64.340241ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 8m11.18688389s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (66.490016ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 8m46.107935354s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (88.191503ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 9m57.683625344s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (66.665968ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 10m49.811736886s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (67.533308ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 11m25.392180318s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (68.439542ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 12m33.978911334s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-091578 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-091578 top pods -n kube-system: exit status 1 (68.068207ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-lznwz, age: 13m12.302682092s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
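Every `kubectl top pods` attempt above reached the API but returned "Metrics not available" for the coredns pod across roughly six minutes of retries (pod age 7m17s through 13m12s), so metrics-server never published pod metrics rather than merely starting slowly. A hedged follow-up sketch, assuming the addon's deployment is named metrics-server as the pod name metrics-server-c59844bb4-4z28f suggests:

	kubectl --context addons-091578 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-091578 -n kube-system logs deploy/metrics-server --tail=50
	kubectl --context addons-091578 top node

An apiservice reporting Available=True together with empty top output points at the scrape path (the kubelet /metrics/resource endpoints), while a False condition points back at the metrics-server pod or its service.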
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-091578 -n addons-091578
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-091578 logs -n 25: (1.508358389s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-232646                                                                     | download-only-232646 | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC | 30 Jul 24 00:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-248146 | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC |                     |
	|         | binary-mirror-248146                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38989                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-248146                                                                     | binary-mirror-248146 | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC | 30 Jul 24 00:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC |                     |
	|         | addons-091578                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC |                     |
	|         | addons-091578                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-091578 --wait=true                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:06 UTC | 30 Jul 24 00:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:13 UTC | 30 Jul 24 00:13 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:13 UTC | 30 Jul 24 00:14 UTC |
	|         | addons-091578                                                                               |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-091578 ssh cat                                                                       | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | /opt/local-path-provisioner/pvc-f03646c2-17c5-467c-9078-e8eb4c5ef372_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-091578 ip                                                                            | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | -p addons-091578                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | -p addons-091578                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | addons-091578                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-091578 ssh curl -s                                                                   | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-091578 addons                                                                        | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-091578 addons                                                                        | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:14 UTC | 30 Jul 24 00:14 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-091578 ip                                                                            | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:16 UTC | 30 Jul 24 00:16 UTC |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:16 UTC | 30 Jul 24 00:16 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-091578 addons disable                                                                | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:16 UTC | 30 Jul 24 00:16 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-091578 addons                                                                        | addons-091578        | jenkins | v1.33.1 | 30 Jul 24 00:20 UTC | 30 Jul 24 00:20 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:06:04
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:06:04.067602  503585 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:06:04.067877  503585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:06:04.067887  503585 out.go:304] Setting ErrFile to fd 2...
	I0730 00:06:04.067892  503585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:06:04.068081  503585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:06:04.068780  503585 out.go:298] Setting JSON to false
	I0730 00:06:04.069698  503585 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6506,"bootTime":1722291458,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:06:04.069760  503585 start.go:139] virtualization: kvm guest
	I0730 00:06:04.071938  503585 out.go:177] * [addons-091578] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:06:04.073318  503585 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:06:04.073379  503585 notify.go:220] Checking for updates...
	I0730 00:06:04.075971  503585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:06:04.077422  503585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:06:04.078580  503585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:06:04.079815  503585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:06:04.080994  503585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:06:04.082458  503585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:06:04.114825  503585 out.go:177] * Using the kvm2 driver based on user configuration
	I0730 00:06:04.116132  503585 start.go:297] selected driver: kvm2
	I0730 00:06:04.116145  503585 start.go:901] validating driver "kvm2" against <nil>
	I0730 00:06:04.116158  503585 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:06:04.116959  503585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:06:04.117062  503585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:06:04.133091  503585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:06:04.133148  503585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 00:06:04.133403  503585 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:06:04.133475  503585 cni.go:84] Creating CNI manager for ""
	I0730 00:06:04.133493  503585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:06:04.133506  503585 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0730 00:06:04.133582  503585 start.go:340] cluster config:
	{Name:addons-091578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:06:04.133725  503585 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:06:04.135650  503585 out.go:177] * Starting "addons-091578" primary control-plane node in "addons-091578" cluster
	I0730 00:06:04.136863  503585 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:06:04.136902  503585 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:06:04.136917  503585 cache.go:56] Caching tarball of preloaded images
	I0730 00:06:04.137035  503585 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:06:04.137049  503585 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:06:04.138283  503585 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/config.json ...
	I0730 00:06:04.138332  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/config.json: {Name:mka41c8a1a5a7058f81c0b1b0ebe27d61d42132f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:04.138502  503585 start.go:360] acquireMachinesLock for addons-091578: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:06:04.138557  503585 start.go:364] duration metric: took 35.748µs to acquireMachinesLock for "addons-091578"
	I0730 00:06:04.138577  503585 start.go:93] Provisioning new machine with config: &{Name:addons-091578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:06:04.138684  503585 start.go:125] createHost starting for "" (driver="kvm2")
	I0730 00:06:04.140460  503585 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0730 00:06:04.140601  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:04.140634  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:04.155348  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
	I0730 00:06:04.155869  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:04.156429  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:04.156449  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:04.156812  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:04.157001  503585 main.go:141] libmachine: (addons-091578) Calling .GetMachineName
	I0730 00:06:04.157263  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:04.157457  503585 start.go:159] libmachine.API.Create for "addons-091578" (driver="kvm2")
	I0730 00:06:04.157486  503585 client.go:168] LocalClient.Create starting
	I0730 00:06:04.157522  503585 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 00:06:04.269943  503585 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 00:06:04.406430  503585 main.go:141] libmachine: Running pre-create checks...
	I0730 00:06:04.406456  503585 main.go:141] libmachine: (addons-091578) Calling .PreCreateCheck
	I0730 00:06:04.407044  503585 main.go:141] libmachine: (addons-091578) Calling .GetConfigRaw
	I0730 00:06:04.407624  503585 main.go:141] libmachine: Creating machine...
	I0730 00:06:04.407644  503585 main.go:141] libmachine: (addons-091578) Calling .Create
	I0730 00:06:04.407954  503585 main.go:141] libmachine: (addons-091578) Creating KVM machine...
	I0730 00:06:04.409192  503585 main.go:141] libmachine: (addons-091578) DBG | found existing default KVM network
	I0730 00:06:04.411724  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:04.409985  503607 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002046d0}
	I0730 00:06:04.411756  503585 main.go:141] libmachine: (addons-091578) DBG | created network xml: 
	I0730 00:06:04.411779  503585 main.go:141] libmachine: (addons-091578) DBG | <network>
	I0730 00:06:04.411794  503585 main.go:141] libmachine: (addons-091578) DBG |   <name>mk-addons-091578</name>
	I0730 00:06:04.411811  503585 main.go:141] libmachine: (addons-091578) DBG |   <dns enable='no'/>
	I0730 00:06:04.411827  503585 main.go:141] libmachine: (addons-091578) DBG |   
	I0730 00:06:04.411843  503585 main.go:141] libmachine: (addons-091578) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0730 00:06:04.411857  503585 main.go:141] libmachine: (addons-091578) DBG |     <dhcp>
	I0730 00:06:04.411880  503585 main.go:141] libmachine: (addons-091578) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0730 00:06:04.411892  503585 main.go:141] libmachine: (addons-091578) DBG |     </dhcp>
	I0730 00:06:04.411912  503585 main.go:141] libmachine: (addons-091578) DBG |   </ip>
	I0730 00:06:04.411928  503585 main.go:141] libmachine: (addons-091578) DBG |   
	I0730 00:06:04.411982  503585 main.go:141] libmachine: (addons-091578) DBG | </network>
	I0730 00:06:04.412023  503585 main.go:141] libmachine: (addons-091578) DBG | 
	I0730 00:06:04.416798  503585 main.go:141] libmachine: (addons-091578) DBG | trying to create private KVM network mk-addons-091578 192.168.39.0/24...
	I0730 00:06:04.494036  503585 main.go:141] libmachine: (addons-091578) DBG | private KVM network mk-addons-091578 192.168.39.0/24 created
	I0730 00:06:04.494075  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:04.494007  503607 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:06:04.494090  503585 main.go:141] libmachine: (addons-091578) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578 ...
	I0730 00:06:04.494134  503585 main.go:141] libmachine: (addons-091578) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 00:06:04.494250  503585 main.go:141] libmachine: (addons-091578) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 00:06:04.804358  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:04.804213  503607 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa...
	I0730 00:06:05.032440  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:05.032279  503607 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/addons-091578.rawdisk...
	I0730 00:06:05.032472  503585 main.go:141] libmachine: (addons-091578) DBG | Writing magic tar header
	I0730 00:06:05.032483  503585 main.go:141] libmachine: (addons-091578) DBG | Writing SSH key tar header
	I0730 00:06:05.032491  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:05.032406  503607 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578 ...
	I0730 00:06:05.032507  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578
	I0730 00:06:05.032589  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 00:06:05.032608  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:06:05.032617  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578 (perms=drwx------)
	I0730 00:06:05.032667  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 00:06:05.032692  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 00:06:05.032701  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 00:06:05.032740  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 00:06:05.032749  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 00:06:05.032758  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 00:06:05.032769  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home/jenkins
	I0730 00:06:05.032786  503585 main.go:141] libmachine: (addons-091578) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 00:06:05.032792  503585 main.go:141] libmachine: (addons-091578) DBG | Checking permissions on dir: /home
	I0730 00:06:05.032800  503585 main.go:141] libmachine: (addons-091578) DBG | Skipping /home - not owner
	I0730 00:06:05.032809  503585 main.go:141] libmachine: (addons-091578) Creating domain...
	I0730 00:06:05.033700  503585 main.go:141] libmachine: (addons-091578) define libvirt domain using xml: 
	I0730 00:06:05.033727  503585 main.go:141] libmachine: (addons-091578) <domain type='kvm'>
	I0730 00:06:05.033738  503585 main.go:141] libmachine: (addons-091578)   <name>addons-091578</name>
	I0730 00:06:05.033750  503585 main.go:141] libmachine: (addons-091578)   <memory unit='MiB'>4000</memory>
	I0730 00:06:05.033762  503585 main.go:141] libmachine: (addons-091578)   <vcpu>2</vcpu>
	I0730 00:06:05.033773  503585 main.go:141] libmachine: (addons-091578)   <features>
	I0730 00:06:05.033783  503585 main.go:141] libmachine: (addons-091578)     <acpi/>
	I0730 00:06:05.033794  503585 main.go:141] libmachine: (addons-091578)     <apic/>
	I0730 00:06:05.033803  503585 main.go:141] libmachine: (addons-091578)     <pae/>
	I0730 00:06:05.033811  503585 main.go:141] libmachine: (addons-091578)     
	I0730 00:06:05.033820  503585 main.go:141] libmachine: (addons-091578)   </features>
	I0730 00:06:05.033833  503585 main.go:141] libmachine: (addons-091578)   <cpu mode='host-passthrough'>
	I0730 00:06:05.033858  503585 main.go:141] libmachine: (addons-091578)   
	I0730 00:06:05.033886  503585 main.go:141] libmachine: (addons-091578)   </cpu>
	I0730 00:06:05.033899  503585 main.go:141] libmachine: (addons-091578)   <os>
	I0730 00:06:05.033909  503585 main.go:141] libmachine: (addons-091578)     <type>hvm</type>
	I0730 00:06:05.033920  503585 main.go:141] libmachine: (addons-091578)     <boot dev='cdrom'/>
	I0730 00:06:05.033930  503585 main.go:141] libmachine: (addons-091578)     <boot dev='hd'/>
	I0730 00:06:05.033942  503585 main.go:141] libmachine: (addons-091578)     <bootmenu enable='no'/>
	I0730 00:06:05.033955  503585 main.go:141] libmachine: (addons-091578)   </os>
	I0730 00:06:05.033967  503585 main.go:141] libmachine: (addons-091578)   <devices>
	I0730 00:06:05.033979  503585 main.go:141] libmachine: (addons-091578)     <disk type='file' device='cdrom'>
	I0730 00:06:05.033996  503585 main.go:141] libmachine: (addons-091578)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/boot2docker.iso'/>
	I0730 00:06:05.034009  503585 main.go:141] libmachine: (addons-091578)       <target dev='hdc' bus='scsi'/>
	I0730 00:06:05.034020  503585 main.go:141] libmachine: (addons-091578)       <readonly/>
	I0730 00:06:05.034032  503585 main.go:141] libmachine: (addons-091578)     </disk>
	I0730 00:06:05.034045  503585 main.go:141] libmachine: (addons-091578)     <disk type='file' device='disk'>
	I0730 00:06:05.034057  503585 main.go:141] libmachine: (addons-091578)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 00:06:05.034071  503585 main.go:141] libmachine: (addons-091578)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/addons-091578.rawdisk'/>
	I0730 00:06:05.034084  503585 main.go:141] libmachine: (addons-091578)       <target dev='hda' bus='virtio'/>
	I0730 00:06:05.034096  503585 main.go:141] libmachine: (addons-091578)     </disk>
	I0730 00:06:05.034110  503585 main.go:141] libmachine: (addons-091578)     <interface type='network'>
	I0730 00:06:05.034123  503585 main.go:141] libmachine: (addons-091578)       <source network='mk-addons-091578'/>
	I0730 00:06:05.034133  503585 main.go:141] libmachine: (addons-091578)       <model type='virtio'/>
	I0730 00:06:05.034144  503585 main.go:141] libmachine: (addons-091578)     </interface>
	I0730 00:06:05.034154  503585 main.go:141] libmachine: (addons-091578)     <interface type='network'>
	I0730 00:06:05.034166  503585 main.go:141] libmachine: (addons-091578)       <source network='default'/>
	I0730 00:06:05.034183  503585 main.go:141] libmachine: (addons-091578)       <model type='virtio'/>
	I0730 00:06:05.034196  503585 main.go:141] libmachine: (addons-091578)     </interface>
	I0730 00:06:05.034206  503585 main.go:141] libmachine: (addons-091578)     <serial type='pty'>
	I0730 00:06:05.034218  503585 main.go:141] libmachine: (addons-091578)       <target port='0'/>
	I0730 00:06:05.034227  503585 main.go:141] libmachine: (addons-091578)     </serial>
	I0730 00:06:05.034239  503585 main.go:141] libmachine: (addons-091578)     <console type='pty'>
	I0730 00:06:05.034254  503585 main.go:141] libmachine: (addons-091578)       <target type='serial' port='0'/>
	I0730 00:06:05.034265  503585 main.go:141] libmachine: (addons-091578)     </console>
	I0730 00:06:05.034276  503585 main.go:141] libmachine: (addons-091578)     <rng model='virtio'>
	I0730 00:06:05.034288  503585 main.go:141] libmachine: (addons-091578)       <backend model='random'>/dev/random</backend>
	I0730 00:06:05.034298  503585 main.go:141] libmachine: (addons-091578)     </rng>
	I0730 00:06:05.034315  503585 main.go:141] libmachine: (addons-091578)     
	I0730 00:06:05.034329  503585 main.go:141] libmachine: (addons-091578)     
	I0730 00:06:05.034340  503585 main.go:141] libmachine: (addons-091578)   </devices>
	I0730 00:06:05.034350  503585 main.go:141] libmachine: (addons-091578) </domain>
	I0730 00:06:05.034362  503585 main.go:141] libmachine: (addons-091578) 
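The domain definition can likewise be sketched with virsh, assuming the <domain> XML above is saved as addons-091578.xml (again an assumed filename; libmachine submits the XML via the libvirt API):

    virsh define addons-091578.xml   # register the domain from the XML above
    virsh start addons-091578        # boot it
    virsh domiflist addons-091578    # confirm the two virtio NICs on 'mk-addons-091578' and 'default'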
	I0730 00:06:05.040130  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:39:46:41 in network default
	I0730 00:06:05.040662  503585 main.go:141] libmachine: (addons-091578) Ensuring networks are active...
	I0730 00:06:05.040683  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:05.041364  503585 main.go:141] libmachine: (addons-091578) Ensuring network default is active
	I0730 00:06:05.041696  503585 main.go:141] libmachine: (addons-091578) Ensuring network mk-addons-091578 is active
	I0730 00:06:05.042243  503585 main.go:141] libmachine: (addons-091578) Getting domain xml...
	I0730 00:06:05.042987  503585 main.go:141] libmachine: (addons-091578) Creating domain...
	I0730 00:06:06.436500  503585 main.go:141] libmachine: (addons-091578) Waiting to get IP...
	I0730 00:06:06.437312  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:06.437641  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:06.437698  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:06.437644  503607 retry.go:31] will retry after 227.017258ms: waiting for machine to come up
	I0730 00:06:06.666137  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:06.666655  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:06.666681  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:06.666605  503607 retry.go:31] will retry after 301.899156ms: waiting for machine to come up
	I0730 00:06:06.970087  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:06.970598  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:06.970629  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:06.970557  503607 retry.go:31] will retry after 460.750332ms: waiting for machine to come up
	I0730 00:06:07.433374  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:07.433754  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:07.433786  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:07.433734  503607 retry.go:31] will retry after 569.719068ms: waiting for machine to come up
	I0730 00:06:08.005647  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:08.005975  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:08.006000  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:08.005936  503607 retry.go:31] will retry after 581.777372ms: waiting for machine to come up
	I0730 00:06:08.589956  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:08.590436  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:08.590467  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:08.590380  503607 retry.go:31] will retry after 585.374235ms: waiting for machine to come up
	I0730 00:06:09.177619  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:09.178031  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:09.178051  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:09.177973  503607 retry.go:31] will retry after 766.103484ms: waiting for machine to come up
	I0730 00:06:09.945937  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:09.946347  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:09.946380  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:09.946295  503607 retry.go:31] will retry after 1.332810558s: waiting for machine to come up
	I0730 00:06:11.280861  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:11.281331  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:11.281381  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:11.281251  503607 retry.go:31] will retry after 1.162526253s: waiting for machine to come up
	I0730 00:06:12.445756  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:12.446085  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:12.446107  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:12.446057  503607 retry.go:31] will retry after 1.459502082s: waiting for machine to come up
	I0730 00:06:13.907851  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:13.908304  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:13.908335  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:13.908241  503607 retry.go:31] will retry after 2.725816137s: waiting for machine to come up
	I0730 00:06:16.637526  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:16.637961  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:16.637986  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:16.637933  503607 retry.go:31] will retry after 3.042906213s: waiting for machine to come up
	I0730 00:06:19.682038  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:19.682445  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:19.682478  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:19.682388  503607 retry.go:31] will retry after 3.206453248s: waiting for machine to come up
	I0730 00:06:22.892793  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:22.893130  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find current IP address of domain addons-091578 in network mk-addons-091578
	I0730 00:06:22.893157  503585 main.go:141] libmachine: (addons-091578) DBG | I0730 00:06:22.893071  503607 retry.go:31] will retry after 5.096569464s: waiting for machine to come up
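The retry loop above is the driver polling libvirt for a DHCP lease on the VM's MAC address. Done manually, the same check looks roughly like this (a sketch; the 2-second interval is arbitrary):

    # wait until the guest's MAC shows up in the private network's lease table, then print its address
    until virsh net-dhcp-leases mk-addons-091578 | grep -q '52:54:00:f9:5f:c4'; do
      sleep 2
    done
    virsh net-dhcp-leases mk-addons-091578 | awk '/52:54:00:f9:5f:c4/ {print $5}'   # e.g. 192.168.39.214/24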
	I0730 00:06:27.990936  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:27.991396  503585 main.go:141] libmachine: (addons-091578) Found IP for machine: 192.168.39.214
	I0730 00:06:27.991424  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has current primary IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:27.991431  503585 main.go:141] libmachine: (addons-091578) Reserving static IP address...
	I0730 00:06:27.991852  503585 main.go:141] libmachine: (addons-091578) DBG | unable to find host DHCP lease matching {name: "addons-091578", mac: "52:54:00:f9:5f:c4", ip: "192.168.39.214"} in network mk-addons-091578
	I0730 00:06:28.066468  503585 main.go:141] libmachine: (addons-091578) DBG | Getting to WaitForSSH function...
	I0730 00:06:28.066499  503585 main.go:141] libmachine: (addons-091578) Reserved static IP address: 192.168.39.214
	I0730 00:06:28.066515  503585 main.go:141] libmachine: (addons-091578) Waiting for SSH to be available...
	I0730 00:06:28.068893  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.069376  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.069407  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.069509  503585 main.go:141] libmachine: (addons-091578) DBG | Using SSH client type: external
	I0730 00:06:28.069530  503585 main.go:141] libmachine: (addons-091578) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa (-rw-------)
	I0730 00:06:28.069589  503585 main.go:141] libmachine: (addons-091578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:06:28.069617  503585 main.go:141] libmachine: (addons-091578) DBG | About to run SSH command:
	I0730 00:06:28.069633  503585 main.go:141] libmachine: (addons-091578) DBG | exit 0
	I0730 00:06:28.192859  503585 main.go:141] libmachine: (addons-091578) DBG | SSH cmd err, output: <nil>: 
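The WaitForSSH step assembles the external ssh invocation listed a few lines up; reproduced as a standalone command (options copied from the logged argument list) it is essentially:

    # probe the guest until a trivial 'exit 0' succeeds over SSH
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa \
        -p 22 docker@192.168.39.214 'exit 0'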
	I0730 00:06:28.193135  503585 main.go:141] libmachine: (addons-091578) KVM machine creation complete!
	I0730 00:06:28.193505  503585 main.go:141] libmachine: (addons-091578) Calling .GetConfigRaw
	I0730 00:06:28.194248  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:28.194457  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:28.194643  503585 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 00:06:28.194659  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:28.195898  503585 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 00:06:28.195916  503585 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 00:06:28.195924  503585 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 00:06:28.195933  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.198027  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.198411  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.198434  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.198583  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.198768  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.198900  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.199016  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.199181  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:28.199443  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:28.199455  503585 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 00:06:28.299976  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:06:28.300002  503585 main.go:141] libmachine: Detecting the provisioner...
	I0730 00:06:28.300013  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.302905  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.303414  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.303446  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.303628  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.303843  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.304014  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.304178  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.304333  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:28.304507  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:28.304517  503585 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 00:06:28.405331  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 00:06:28.405411  503585 main.go:141] libmachine: found compatible host: buildroot
	I0730 00:06:28.405419  503585 main.go:141] libmachine: Provisioning with buildroot...
	I0730 00:06:28.405428  503585 main.go:141] libmachine: (addons-091578) Calling .GetMachineName
	I0730 00:06:28.405738  503585 buildroot.go:166] provisioning hostname "addons-091578"
	I0730 00:06:28.405776  503585 main.go:141] libmachine: (addons-091578) Calling .GetMachineName
	I0730 00:06:28.406024  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.408913  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.409647  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.410036  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.410314  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.410510  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.410671  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.410805  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.411000  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:28.411182  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:28.411196  503585 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-091578 && echo "addons-091578" | sudo tee /etc/hostname
	I0730 00:06:28.525849  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-091578
	
	I0730 00:06:28.525877  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.528740  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.529021  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.529052  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.529217  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.529428  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.529631  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.529787  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.529964  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:28.530204  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:28.530230  503585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-091578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-091578/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-091578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:06:28.636658  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:06:28.636691  503585 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:06:28.636732  503585 buildroot.go:174] setting up certificates
	I0730 00:06:28.636759  503585 provision.go:84] configureAuth start
	I0730 00:06:28.636775  503585 main.go:141] libmachine: (addons-091578) Calling .GetMachineName
	I0730 00:06:28.637096  503585 main.go:141] libmachine: (addons-091578) Calling .GetIP
	I0730 00:06:28.639919  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.640232  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.640254  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.640386  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.642677  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.643167  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.643188  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.643331  503585 provision.go:143] copyHostCerts
	I0730 00:06:28.643438  503585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:06:28.643556  503585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:06:28.643623  503585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:06:28.643671  503585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.addons-091578 san=[127.0.0.1 192.168.39.214 addons-091578 localhost minikube]
	I0730 00:06:28.865726  503585 provision.go:177] copyRemoteCerts
	I0730 00:06:28.865802  503585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:06:28.865830  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:28.869004  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.869295  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:28.869328  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:28.869460  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:28.869676  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:28.869842  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:28.869975  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:28.951040  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:06:28.975965  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 00:06:29.000305  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:06:29.024340  503585 provision.go:87] duration metric: took 387.555523ms to configureAuth
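copyRemoteCerts above pushes three PEM files into /etc/docker on the guest. A hand-rolled equivalent (a sketch: each file is streamed through ssh and written with sudo, since the docker user cannot write to /etc/docker directly):

    MK=/home/jenkins/minikube-integration/19346-495103/.minikube
    KEY=$MK/machines/addons-091578/id_rsa
    SSH="ssh -i $KEY -o StrictHostKeyChecking=no docker@192.168.39.214"
    $SSH "sudo mkdir -p /etc/docker"
    cat $MK/certs/ca.pem            | $SSH "sudo tee /etc/docker/ca.pem >/dev/null"
    cat $MK/machines/server.pem     | $SSH "sudo tee /etc/docker/server.pem >/dev/null"
    cat $MK/machines/server-key.pem | $SSH "sudo tee /etc/docker/server-key.pem >/dev/null"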
	I0730 00:06:29.024372  503585 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:06:29.024590  503585 config.go:182] Loaded profile config "addons-091578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:06:29.024724  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.027324  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.027632  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.027659  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.027776  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.028011  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.028167  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.028336  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.028502  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:29.028669  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:29.028682  503585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:06:29.275401  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
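The %!s(MISSING) tokens in the command a few lines above are artifacts of the message being passed through a printf-style logger; reconstructed from the quoted body (a reconstruction, not a capture), the command sent over SSH is:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio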
	
	I0730 00:06:29.275432  503585 main.go:141] libmachine: Checking connection to Docker...
	I0730 00:06:29.275442  503585 main.go:141] libmachine: (addons-091578) Calling .GetURL
	I0730 00:06:29.276678  503585 main.go:141] libmachine: (addons-091578) DBG | Using libvirt version 6000000
	I0730 00:06:29.278812  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.279186  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.279242  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.279266  503585 main.go:141] libmachine: Docker is up and running!
	I0730 00:06:29.279285  503585 main.go:141] libmachine: Reticulating splines...
	I0730 00:06:29.279293  503585 client.go:171] duration metric: took 25.121799468s to LocalClient.Create
	I0730 00:06:29.279319  503585 start.go:167] duration metric: took 25.121864048s to libmachine.API.Create "addons-091578"
	I0730 00:06:29.279330  503585 start.go:293] postStartSetup for "addons-091578" (driver="kvm2")
	I0730 00:06:29.279340  503585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:06:29.279358  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.279620  503585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:06:29.279645  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.281915  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.282188  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.282217  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.282435  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.282716  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.282933  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.283068  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:29.362884  503585 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:06:29.366734  503585 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:06:29.366766  503585 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:06:29.366864  503585 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:06:29.366898  503585 start.go:296] duration metric: took 87.56036ms for postStartSetup
	I0730 00:06:29.366956  503585 main.go:141] libmachine: (addons-091578) Calling .GetConfigRaw
	I0730 00:06:29.367636  503585 main.go:141] libmachine: (addons-091578) Calling .GetIP
	I0730 00:06:29.370387  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.370725  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.370757  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.370973  503585 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/config.json ...
	I0730 00:06:29.371166  503585 start.go:128] duration metric: took 25.232469033s to createHost
	I0730 00:06:29.371193  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.373627  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.373955  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.373976  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.374128  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.374322  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.374509  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.374642  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.374836  503585 main.go:141] libmachine: Using SSH client type: native
	I0730 00:06:29.375017  503585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0730 00:06:29.375029  503585 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 00:06:29.481225  503585 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722297989.460630817
	
	I0730 00:06:29.481255  503585 fix.go:216] guest clock: 1722297989.460630817
	I0730 00:06:29.481267  503585 fix.go:229] Guest: 2024-07-30 00:06:29.460630817 +0000 UTC Remote: 2024-07-30 00:06:29.371178586 +0000 UTC m=+25.339019431 (delta=89.452231ms)
	I0730 00:06:29.481300  503585 fix.go:200] guest clock delta is within tolerance: 89.452231ms
	I0730 00:06:29.481306  503585 start.go:83] releasing machines lock for "addons-091578", held for 25.342740042s
	I0730 00:06:29.481331  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.481668  503585 main.go:141] libmachine: (addons-091578) Calling .GetIP
	I0730 00:06:29.484292  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.484691  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.484739  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.484816  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.485351  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.485544  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:29.485647  503585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:06:29.485696  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.485804  503585 ssh_runner.go:195] Run: cat /version.json
	I0730 00:06:29.485821  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:29.488282  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.488476  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.488607  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.488635  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.488826  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:29.488840  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.488848  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:29.489010  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:29.489111  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.489207  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:29.489273  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.489353  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:29.489416  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:29.489465  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:29.561348  503585 ssh_runner.go:195] Run: systemctl --version
	I0730 00:06:29.595588  503585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:06:29.752489  503585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:06:29.758186  503585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:06:29.758272  503585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:06:29.774299  503585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 00:06:29.774331  503585 start.go:495] detecting cgroup driver to use...
	I0730 00:06:29.774408  503585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:06:29.790689  503585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:06:29.804473  503585 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:06:29.804541  503585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:06:29.817576  503585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:06:29.831051  503585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:06:29.938437  503585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:06:30.089799  503585 docker.go:233] disabling docker service ...
	I0730 00:06:30.089890  503585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:06:30.103656  503585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:06:30.115831  503585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:06:30.237644  503585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:06:30.354697  503585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:06:30.367512  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:06:30.384785  503585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:06:30.384847  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.394460  503585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:06:30.394528  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.404052  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.413692  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.423316  503585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:06:30.433261  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.443231  503585 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:06:30.459823  503585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
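After the sed edits above, the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf ends up containing roughly the following (a reconstruction from the logged commands, not a capture of the file; the [crio.image]/[crio.runtime] table headers are assumptions based on the stock layout of that drop-in):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]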
	I0730 00:06:30.469807  503585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:06:30.478807  503585 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 00:06:30.478880  503585 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 00:06:30.492287  503585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:06:30.501783  503585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:06:30.616317  503585 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 00:06:30.742290  503585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:06:30.742385  503585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:06:30.746811  503585 start.go:563] Will wait 60s for crictl version
	I0730 00:06:30.746886  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:06:30.750374  503585 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:06:30.787626  503585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:06:30.787762  503585 ssh_runner.go:195] Run: crio --version
	I0730 00:06:30.813701  503585 ssh_runner.go:195] Run: crio --version
	I0730 00:06:30.841999  503585 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:06:30.843422  503585 main.go:141] libmachine: (addons-091578) Calling .GetIP
	I0730 00:06:30.846100  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:30.846448  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:30.846478  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:30.846673  503585 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:06:30.850909  503585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:06:30.862451  503585 kubeadm.go:883] updating cluster {Name:addons-091578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 00:06:30.862593  503585 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:06:30.862657  503585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:06:30.891616  503585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0730 00:06:30.891689  503585 ssh_runner.go:195] Run: which lz4
	I0730 00:06:30.895286  503585 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0730 00:06:30.899173  503585 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0730 00:06:30.899206  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0730 00:06:32.017133  503585 crio.go:462] duration metric: took 1.121886601s to copy over tarball
	I0730 00:06:32.017222  503585 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0730 00:06:34.221238  503585 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.203977277s)
	I0730 00:06:34.221273  503585 crio.go:469] duration metric: took 2.20410772s to extract the tarball
	I0730 00:06:34.221285  503585 ssh_runner.go:146] rm: /preloaded.tar.lz4
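Because the guest had no preloaded images, the preload tarball is copied in and unpacked under /var. The equivalent manual sequence (a sketch; cat|ssh stands in for the scp step so the file can be written to / with sudo):

    MK=/home/jenkins/minikube-integration/19346-495103/.minikube
    KEY=$MK/machines/addons-091578/id_rsa
    SSH="ssh -i $KEY -o StrictHostKeyChecking=no docker@192.168.39.214"
    cat $MK/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 \
      | $SSH "sudo tee /preloaded.tar.lz4 >/dev/null"
    $SSH "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"
    $SSH "sudo rm /preloaded.tar.lz4 && sudo crictl images --output json"   # verify the images are now present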
	I0730 00:06:34.258279  503585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:06:34.298516  503585 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:06:34.298543  503585 cache_images.go:84] Images are preloaded, skipping loading
	I0730 00:06:34.298552  503585 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.30.3 crio true true} ...
	I0730 00:06:34.298694  503585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-091578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:06:34.298763  503585 ssh_runner.go:195] Run: crio config
	I0730 00:06:34.341041  503585 cni.go:84] Creating CNI manager for ""
	I0730 00:06:34.341069  503585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:06:34.341087  503585 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 00:06:34.341117  503585 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-091578 NodeName:addons-091578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 00:06:34.341290  503585 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-091578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 00:06:34.341369  503585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:06:34.350448  503585 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 00:06:34.350531  503585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 00:06:34.359125  503585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0730 00:06:34.375453  503585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:06:34.391743  503585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0730 00:06:34.408633  503585 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0730 00:06:34.412323  503585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:06:34.425133  503585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:06:34.543503  503585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:06:34.559696  503585 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578 for IP: 192.168.39.214
	I0730 00:06:34.559729  503585 certs.go:194] generating shared ca certs ...
	I0730 00:06:34.559753  503585 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.559942  503585 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:06:34.777287  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt ...
	I0730 00:06:34.777321  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt: {Name:mkb7ea0bad21ae509edda96159e2c7ea1e30c6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.777534  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key ...
	I0730 00:06:34.777553  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key: {Name:mk4e96af191191f480b46c042f1e27b6aeadd365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.777667  503585 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:06:34.996212  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt ...
	I0730 00:06:34.996245  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt: {Name:mk139a030973db209f8ffe3406c971813e95e901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.996422  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key ...
	I0730 00:06:34.996434  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key: {Name:mkc555616fa7470fab21853628568988b93ea51a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:34.996504  503585 certs.go:256] generating profile certs ...
	I0730 00:06:34.996568  503585 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.key
	I0730 00:06:34.996582  503585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt with IP's: []
	I0730 00:06:35.240339  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt ...
	I0730 00:06:35.240373  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: {Name:mk9185df29d5fb509b2c24a719fe223587ce7578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.240551  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.key ...
	I0730 00:06:35.240562  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.key: {Name:mk44e6c060866c5d708c17c60140d362e29beee9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.240633  503585 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key.37bc2271
	I0730 00:06:35.240650  503585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt.37bc2271 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I0730 00:06:35.485444  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt.37bc2271 ...
	I0730 00:06:35.485478  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt.37bc2271: {Name:mk2e174214ad821c70c65f7506c7e1bcfa80282d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.485667  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key.37bc2271 ...
	I0730 00:06:35.485690  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key.37bc2271: {Name:mk8edb370e6bc7cb67eb48b97217b15577bb8eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.485795  503585 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt.37bc2271 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt
	I0730 00:06:35.485902  503585 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key.37bc2271 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key
	I0730 00:06:35.485969  503585 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.key
	I0730 00:06:35.485995  503585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.crt with IP's: []
	I0730 00:06:35.626274  503585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.crt ...
	I0730 00:06:35.626305  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.crt: {Name:mk36be54f25383cab0071dd0bffb7bb3c83d494d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.626499  503585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.key ...
	I0730 00:06:35.626522  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.key: {Name:mkeaff3af3d6f1e9defee6cc86036e50dd4f2e6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:35.626737  503585 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:06:35.626782  503585 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:06:35.626822  503585 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:06:35.626851  503585 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:06:35.627517  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:06:35.651310  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:06:35.673486  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:06:35.702911  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:06:35.725540  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0730 00:06:35.748100  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 00:06:35.770433  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:06:35.797234  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:06:35.819464  503585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:06:35.842174  503585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 00:06:35.857908  503585 ssh_runner.go:195] Run: openssl version
	I0730 00:06:35.863488  503585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:06:35.873865  503585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:06:35.878111  503585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:06:35.878172  503585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:06:35.883866  503585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:06:35.894221  503585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:06:35.898198  503585 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 00:06:35.898275  503585 kubeadm.go:392] StartCluster: {Name:addons-091578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-091578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:06:35.898354  503585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 00:06:35.898400  503585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 00:06:35.933302  503585 cri.go:89] found id: ""
	I0730 00:06:35.933385  503585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 00:06:35.942952  503585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0730 00:06:35.952117  503585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 00:06:35.961020  503585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 00:06:35.961066  503585 kubeadm.go:157] found existing configuration files:
	
	I0730 00:06:35.961115  503585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 00:06:35.970201  503585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 00:06:35.970266  503585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 00:06:35.979169  503585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 00:06:35.987909  503585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 00:06:35.987976  503585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 00:06:35.997111  503585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 00:06:36.006365  503585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 00:06:36.006436  503585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 00:06:36.015400  503585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 00:06:36.024005  503585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 00:06:36.024067  503585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0730 00:06:36.033057  503585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0730 00:06:36.087367  503585 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0730 00:06:36.087449  503585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0730 00:06:36.211093  503585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0730 00:06:36.211247  503585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0730 00:06:36.211394  503585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0730 00:06:36.426383  503585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0730 00:06:36.517863  503585 out.go:204]   - Generating certificates and keys ...
	I0730 00:06:36.517997  503585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0730 00:06:36.518109  503585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0730 00:06:36.588207  503585 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0730 00:06:36.674406  503585 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0730 00:06:36.733368  503585 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0730 00:06:36.824132  503585 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0730 00:06:37.052552  503585 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0730 00:06:37.052771  503585 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-091578 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I0730 00:06:37.339834  503585 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0730 00:06:37.340047  503585 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-091578 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I0730 00:06:37.474138  503585 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0730 00:06:37.566852  503585 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0730 00:06:37.689891  503585 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0730 00:06:37.690126  503585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0730 00:06:37.912398  503585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0730 00:06:38.088617  503585 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0730 00:06:38.149969  503585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0730 00:06:38.368471  503585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0730 00:06:38.532698  503585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0730 00:06:38.533473  503585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0730 00:06:38.537497  503585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0730 00:06:38.596568  503585 out.go:204]   - Booting up control plane ...
	I0730 00:06:38.596738  503585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0730 00:06:38.596861  503585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0730 00:06:38.596975  503585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0730 00:06:38.597134  503585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0730 00:06:38.597251  503585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0730 00:06:38.597320  503585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0730 00:06:38.675186  503585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0730 00:06:38.675286  503585 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0730 00:06:39.176987  503585 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.110655ms
	I0730 00:06:39.177126  503585 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0730 00:06:43.676985  503585 kubeadm.go:310] [api-check] The API server is healthy after 4.5018289s
	I0730 00:06:43.694821  503585 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0730 00:06:43.706043  503585 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0730 00:06:43.729901  503585 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0730 00:06:43.730195  503585 kubeadm.go:310] [mark-control-plane] Marking the node addons-091578 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0730 00:06:43.743000  503585 kubeadm.go:310] [bootstrap-token] Using token: 4lszgu.k109gvlsncythwao
	I0730 00:06:43.744466  503585 out.go:204]   - Configuring RBAC rules ...
	I0730 00:06:43.744617  503585 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0730 00:06:43.751633  503585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0730 00:06:43.758261  503585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0730 00:06:43.761401  503585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0730 00:06:43.765026  503585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0730 00:06:43.768427  503585 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0730 00:06:44.085319  503585 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0730 00:06:44.508125  503585 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0730 00:06:45.085194  503585 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0730 00:06:45.086673  503585 kubeadm.go:310] 
	I0730 00:06:45.086755  503585 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0730 00:06:45.086773  503585 kubeadm.go:310] 
	I0730 00:06:45.086848  503585 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0730 00:06:45.086857  503585 kubeadm.go:310] 
	I0730 00:06:45.086913  503585 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0730 00:06:45.087012  503585 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0730 00:06:45.087088  503585 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0730 00:06:45.087099  503585 kubeadm.go:310] 
	I0730 00:06:45.087174  503585 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0730 00:06:45.087186  503585 kubeadm.go:310] 
	I0730 00:06:45.087245  503585 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0730 00:06:45.087259  503585 kubeadm.go:310] 
	I0730 00:06:45.087328  503585 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0730 00:06:45.087430  503585 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0730 00:06:45.087539  503585 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0730 00:06:45.087549  503585 kubeadm.go:310] 
	I0730 00:06:45.087672  503585 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0730 00:06:45.087760  503585 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0730 00:06:45.087772  503585 kubeadm.go:310] 
	I0730 00:06:45.087869  503585 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4lszgu.k109gvlsncythwao \
	I0730 00:06:45.087953  503585 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 \
	I0730 00:06:45.087972  503585 kubeadm.go:310] 	--control-plane 
	I0730 00:06:45.087979  503585 kubeadm.go:310] 
	I0730 00:06:45.088051  503585 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0730 00:06:45.088058  503585 kubeadm.go:310] 
	I0730 00:06:45.088130  503585 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4lszgu.k109gvlsncythwao \
	I0730 00:06:45.088209  503585 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 
	I0730 00:06:45.089092  503585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0730 00:06:45.089165  503585 cni.go:84] Creating CNI manager for ""
	I0730 00:06:45.089182  503585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:06:45.091035  503585 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0730 00:06:45.092349  503585 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0730 00:06:45.102258  503585 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0730 00:06:45.119378  503585 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0730 00:06:45.119449  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:45.119513  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-091578 minikube.k8s.io/updated_at=2024_07_30T00_06_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=addons-091578 minikube.k8s.io/primary=true
	I0730 00:06:45.142329  503585 ops.go:34] apiserver oom_adj: -16
	I0730 00:06:45.246478  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:45.747518  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:46.247455  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:46.747523  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:47.247299  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:47.746862  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:48.246905  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:48.746983  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:49.246838  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:49.747341  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:50.247276  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:50.746745  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:51.246810  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:51.747325  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:52.246946  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:52.747198  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:53.246668  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:53.746601  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:54.247274  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:54.747485  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:55.247492  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:55.747313  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:56.247138  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:56.746514  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:57.247169  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:57.746516  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:58.246570  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:58.747086  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:59.247212  503585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:06:59.339609  503585 kubeadm.go:1113] duration metric: took 14.220221794s to wait for elevateKubeSystemPrivileges
	I0730 00:06:59.339661  503585 kubeadm.go:394] duration metric: took 23.441392171s to StartCluster
	I0730 00:06:59.339693  503585 settings.go:142] acquiring lock: {Name:mk89b2537c1ec20302d90ab73b167422bb363b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:59.339860  503585 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:06:59.340499  503585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/kubeconfig: {Name:mk6ecf4e5b07b810f1fa2b9790857d7586f0cf41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:06:59.340753  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0730 00:06:59.340790  503585 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:06:59.340875  503585 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0730 00:06:59.340975  503585 addons.go:69] Setting yakd=true in profile "addons-091578"
	I0730 00:06:59.340996  503585 addons.go:69] Setting default-storageclass=true in profile "addons-091578"
	I0730 00:06:59.341015  503585 addons.go:69] Setting helm-tiller=true in profile "addons-091578"
	I0730 00:06:59.341009  503585 addons.go:69] Setting cloud-spanner=true in profile "addons-091578"
	I0730 00:06:59.341029  503585 addons.go:69] Setting storage-provisioner=true in profile "addons-091578"
	I0730 00:06:59.341038  503585 addons.go:69] Setting volcano=true in profile "addons-091578"
	I0730 00:06:59.341040  503585 addons.go:234] Setting addon helm-tiller=true in "addons-091578"
	I0730 00:06:59.341046  503585 config.go:182] Loaded profile config "addons-091578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:06:59.341057  503585 addons.go:234] Setting addon volcano=true in "addons-091578"
	I0730 00:06:59.341059  503585 addons.go:69] Setting inspektor-gadget=true in profile "addons-091578"
	I0730 00:06:59.341063  503585 addons.go:69] Setting ingress=true in profile "addons-091578"
	I0730 00:06:59.341076  503585 addons.go:234] Setting addon cloud-spanner=true in "addons-091578"
	I0730 00:06:59.341083  503585 addons.go:234] Setting addon ingress=true in "addons-091578"
	I0730 00:06:59.341095  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341096  503585 addons.go:69] Setting volumesnapshots=true in profile "addons-091578"
	I0730 00:06:59.341101  503585 addons.go:69] Setting metrics-server=true in profile "addons-091578"
	I0730 00:06:59.341009  503585 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-091578"
	I0730 00:06:59.341118  503585 addons.go:234] Setting addon metrics-server=true in "addons-091578"
	I0730 00:06:59.341120  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341136  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341140  503585 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-091578"
	I0730 00:06:59.341172  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341095  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341021  503585 addons.go:69] Setting registry=true in profile "addons-091578"
	I0730 00:06:59.341601  503585 addons.go:234] Setting addon registry=true in "addons-091578"
	I0730 00:06:59.341664  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341030  503585 addons.go:234] Setting addon yakd=true in "addons-091578"
	I0730 00:06:59.341077  503585 addons.go:234] Setting addon inspektor-gadget=true in "addons-091578"
	I0730 00:06:59.341943  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.342140  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342176  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.342206  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342227  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342245  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.342248  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342273  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.342281  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.342356  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342380  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.340985  503585 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-091578"
	I0730 00:06:59.342782  503585 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-091578"
	I0730 00:06:59.342814  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.342821  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.342854  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.343214  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.343242  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.343373  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.343424  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.343878  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.344462  503585 out.go:177] * Verifying Kubernetes components...
	I0730 00:06:59.344424  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.341012  503585 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-091578"
	I0730 00:06:59.340979  503585 addons.go:69] Setting gcp-auth=true in profile "addons-091578"
	I0730 00:06:59.341051  503585 addons.go:234] Setting addon storage-provisioner=true in "addons-091578"
	I0730 00:06:59.341048  503585 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-091578"
	I0730 00:06:59.341059  503585 addons.go:69] Setting ingress-dns=true in profile "addons-091578"
	I0730 00:06:59.341110  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.341115  503585 addons.go:234] Setting addon volumesnapshots=true in "addons-091578"
	I0730 00:06:59.344801  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.345153  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.345244  503585 addons.go:234] Setting addon ingress-dns=true in "addons-091578"
	I0730 00:06:59.345355  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.345411  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.345461  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.345604  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.345658  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.346159  503585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:06:59.346652  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.346683  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.346716  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.346776  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.346836  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.346881  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.349702  503585 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-091578"
	I0730 00:06:59.349810  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.350244  503585 mustload.go:65] Loading cluster: addons-091578
	I0730 00:06:59.364093  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0730 00:06:59.364283  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0730 00:06:59.364761  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.365038  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.365322  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.365347  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.365431  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0730 00:06:59.365726  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.365898  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.366450  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.366514  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.366873  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.366894  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.366913  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I0730 00:06:59.367358  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.367432  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.367447  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.367454  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.367962  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.368156  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.368208  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.368486  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.368508  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.369060  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.369100  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.369590  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.370350  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.370395  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.380518  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38353
	I0730 00:06:59.381170  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.382363  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37195
	I0730 00:06:59.382970  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.383925  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.383950  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.384057  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0730 00:06:59.384394  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.384479  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.385572  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.385592  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.385933  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.385975  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.386650  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.386699  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.387125  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.387166  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.388109  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.388152  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.389207  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.389266  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.389450  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I0730 00:06:59.397007  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0730 00:06:59.397105  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0730 00:06:59.397300  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0730 00:06:59.397475  503585 config.go:182] Loaded profile config "addons-091578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:06:59.397990  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.398137  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.398158  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.398172  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.398240  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.398267  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.399031  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.399160  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.399364  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.399378  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.399586  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.399600  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.399677  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39837
	I0730 00:06:59.399961  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.399975  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.400076  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.400170  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.400825  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.401095  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.401130  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.401204  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.401890  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.401942  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.402177  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.402553  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.404577  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.406813  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0730 00:06:59.407265  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.408221  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.408245  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.408635  503585 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0730 00:06:59.408682  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.408736  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.408958  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.409030  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.409273  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.409802  503585 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0730 00:06:59.409823  503585 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0730 00:06:59.409847  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.413396  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.413415  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.415053  503585 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0730 00:06:59.415222  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.415316  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.415333  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.415594  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.415797  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.416024  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.416295  503585 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0730 00:06:59.416313  503585 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0730 00:06:59.416334  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.417013  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0730 00:06:59.418422  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0730 00:06:59.419440  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.419500  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0730 00:06:59.419752  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.419774  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.420050  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.420250  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.420408  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.420564  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.422203  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0730 00:06:59.423510  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0730 00:06:59.424927  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0730 00:06:59.425921  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0730 00:06:59.426034  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0730 00:06:59.426370  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.426980  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.427000  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.427197  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0730 00:06:59.427215  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0730 00:06:59.427237  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.427341  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.427515  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.430777  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.430793  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.431066  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0730 00:06:59.431264  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.431288  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.431407  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.431578  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.431757  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.431923  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.432420  503585 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0730 00:06:59.433121  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.433593  503585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0730 00:06:59.433613  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0730 00:06:59.433632  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.433712  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.433731  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.436787  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.437233  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.437261  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.437505  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.437706  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.437899  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.437924  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.438099  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.438104  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.439779  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.440058  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:06:59.440072  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:06:59.440312  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:06:59.440342  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:06:59.440357  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:06:59.440366  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:06:59.440374  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:06:59.440544  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:06:59.440559  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:06:59.440568  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	W0730 00:06:59.440675  503585 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0730 00:06:59.441171  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0730 00:06:59.442523  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.443227  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.443250  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.443621  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I0730 00:06:59.443977  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44855
	I0730 00:06:59.444293  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.444790  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.444803  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.444858  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.445165  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.445741  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.445761  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.446490  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.447006  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.447032  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.447531  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.447766  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.449644  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.450967  503585 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-091578"
	I0730 00:06:59.451013  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.451389  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.451440  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.451689  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.453607  503585 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0730 00:06:59.454250  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0730 00:06:59.454258  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0730 00:06:59.454710  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.455075  503585 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0730 00:06:59.455095  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0730 00:06:59.455115  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.455219  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.455238  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.455398  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0730 00:06:59.455625  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.455634  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I0730 00:06:59.455776  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.455951  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.456050  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.456362  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.456367  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.456379  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.456384  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.456507  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.456518  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.456641  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.456657  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.456851  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.456884  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.457081  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.457152  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.457625  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.457691  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.458809  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.459200  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.459239  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.459346  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0730 00:06:59.459489  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.459760  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.460220  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.460247  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.460550  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.460847  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.460898  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.461218  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.461404  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.461444  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.461578  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.461766  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.461930  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.462095  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.463056  503585 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0730 00:06:59.463397  503585 addons.go:234] Setting addon default-storageclass=true in "addons-091578"
	I0730 00:06:59.463440  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:06:59.463762  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.463793  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.464051  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0730 00:06:59.464464  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.465004  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.465023  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.465365  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0730 00:06:59.465532  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.465926  503585 out.go:177]   - Using image docker.io/registry:2.8.3
	I0730 00:06:59.466073  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.466109  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.466708  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.467302  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.467319  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.467342  503585 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0730 00:06:59.467358  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0730 00:06:59.467377  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.469355  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.469766  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.471361  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.471971  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.472009  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.472222  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.472407  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.472578  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.472754  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.477796  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.479887  503585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0730 00:06:59.479934  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34277
	I0730 00:06:59.480394  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.480547  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0730 00:06:59.481142  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.481161  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.481536  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.481606  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.482191  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.482236  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.482468  503585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 00:06:59.482912  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.482930  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.483521  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.483980  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.484468  503585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 00:06:59.485531  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0730 00:06:59.485729  503585 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0730 00:06:59.485750  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0730 00:06:59.485772  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.485732  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0730 00:06:59.486008  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.486235  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.486598  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.486618  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.486725  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.486747  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.486927  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.487039  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.487111  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.487385  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.487450  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.489366  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.489710  503585 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0730 00:06:59.490148  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.490670  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.490709  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.490676  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0730 00:06:59.490908  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0730 00:06:59.490927  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.490934  503585 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0730 00:06:59.490953  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.490978  503585 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0730 00:06:59.491091  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.491190  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.491326  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.491482  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.491496  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.491636  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.492093  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0730 00:06:59.492107  503585 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0730 00:06:59.492113  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.492122  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.492292  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.494039  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.495580  503585 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0730 00:06:59.495951  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.496070  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.496431  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.496527  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.496790  503585 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0730 00:06:59.496805  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0730 00:06:59.496820  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.496836  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.496855  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.496887  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.497119  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.497120  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.497551  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.497945  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.498192  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.498244  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.498395  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.499376  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.499710  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.499730  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.499870  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.500047  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.500210  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.500388  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.502691  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I0730 00:06:59.503123  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.503568  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.503592  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.503958  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.504161  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.505946  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.507983  503585 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0730 00:06:59.509242  503585 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0730 00:06:59.509263  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0730 00:06:59.509286  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.510661  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42627
	I0730 00:06:59.511264  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.511889  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.511908  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.512442  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.513087  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:06:59.513130  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:06:59.513387  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.517381  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.517413  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.517524  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.517704  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.517852  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.518000  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	W0730 00:06:59.521243  503585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56976->192.168.39.214:22: read: connection reset by peer
	I0730 00:06:59.521275  503585 retry.go:31] will retry after 192.776047ms: ssh: handshake failed: read tcp 192.168.39.1:56976->192.168.39.214:22: read: connection reset by peer
	I0730 00:06:59.521913  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I0730 00:06:59.522380  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.522585  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0730 00:06:59.522850  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.522877  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.522940  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.523257  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.523446  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.523887  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.523905  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.524523  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.524810  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:06:59.525589  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.527889  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.528191  503585 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 00:06:59.529848  503585 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 00:06:59.529867  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0730 00:06:59.529885  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.529961  503585 out.go:177]   - Using image docker.io/busybox:stable
	I0730 00:06:59.531096  503585 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0730 00:06:59.532555  503585 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0730 00:06:59.532573  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0730 00:06:59.532592  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.533273  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.533746  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.533769  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.533933  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.534172  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.534315  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.534472  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.535688  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	W0730 00:06:59.535873  503585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0730 00:06:59.535894  503585 retry.go:31] will retry after 225.32749ms: ssh: handshake failed: EOF
	I0730 00:06:59.536093  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.536121  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.536305  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.536468  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.536570  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.536638  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0730 00:06:59.536820  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.537093  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:06:59.537634  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:06:59.537653  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:06:59.537968  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:06:59.538165  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	W0730 00:06:59.539314  503585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56992->192.168.39.214:22: read: connection reset by peer
	I0730 00:06:59.539350  503585 retry.go:31] will retry after 343.324768ms: ssh: handshake failed: read tcp 192.168.39.1:56992->192.168.39.214:22: read: connection reset by peer
	I0730 00:06:59.539582  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:06:59.539814  503585 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0730 00:06:59.539828  503585 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0730 00:06:59.539846  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:06:59.542830  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.543224  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:06:59.543245  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:06:59.543383  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:06:59.543543  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:06:59.543670  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:06:59.543797  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:06:59.843337  503585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0730 00:06:59.843365  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0730 00:06:59.913024  503585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0730 00:06:59.913062  503585 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0730 00:06:59.928765  503585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0730 00:06:59.928806  503585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0730 00:06:59.932177  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0730 00:06:59.970105  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0730 00:06:59.972228  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0730 00:06:59.974920  503585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0730 00:06:59.974944  503585 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0730 00:06:59.983115  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0730 00:06:59.983137  503585 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0730 00:06:59.991932  503585 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0730 00:06:59.991960  503585 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0730 00:07:00.027140  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0730 00:07:00.042042  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0730 00:07:00.042076  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0730 00:07:00.062815  503585 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0730 00:07:00.062847  503585 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0730 00:07:00.080521  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0730 00:07:00.102436  503585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0730 00:07:00.102469  503585 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0730 00:07:00.105326  503585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:07:00.105526  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0730 00:07:00.138511  503585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0730 00:07:00.138535  503585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0730 00:07:00.150300  503585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0730 00:07:00.150326  503585 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0730 00:07:00.169210  503585 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0730 00:07:00.169300  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0730 00:07:00.177603  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0730 00:07:00.177628  503585 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0730 00:07:00.191086  503585 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0730 00:07:00.191116  503585 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0730 00:07:00.207571  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0730 00:07:00.207606  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0730 00:07:00.323650  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0730 00:07:00.327681  503585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0730 00:07:00.327712  503585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0730 00:07:00.344053  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 00:07:00.344500  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0730 00:07:00.392580  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0730 00:07:00.411691  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0730 00:07:00.411725  503585 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0730 00:07:00.413533  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0730 00:07:00.413557  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0730 00:07:00.439634  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0730 00:07:00.469406  503585 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0730 00:07:00.469435  503585 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0730 00:07:00.486667  503585 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0730 00:07:00.486689  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0730 00:07:00.576154  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0730 00:07:00.576191  503585 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0730 00:07:00.645426  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0730 00:07:00.645468  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0730 00:07:00.722682  503585 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0730 00:07:00.722722  503585 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0730 00:07:00.728598  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0730 00:07:00.827435  503585 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 00:07:00.827461  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0730 00:07:00.925005  503585 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0730 00:07:00.925033  503585 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0730 00:07:00.925565  503585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0730 00:07:00.925599  503585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0730 00:07:01.063902  503585 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0730 00:07:01.063943  503585 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0730 00:07:01.066096  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 00:07:01.121144  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0730 00:07:01.121171  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0730 00:07:01.295864  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.363636237s)
	I0730 00:07:01.295936  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:01.295951  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:01.296310  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:01.296330  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:01.296330  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:01.296345  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:01.296355  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:01.296635  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:01.296651  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:01.305854  503585 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0730 00:07:01.305882  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0730 00:07:01.488469  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0730 00:07:01.488509  503585 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0730 00:07:01.627165  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0730 00:07:01.754381  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0730 00:07:01.754424  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0730 00:07:02.031152  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0730 00:07:02.031185  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0730 00:07:02.392908  503585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0730 00:07:02.392943  503585 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0730 00:07:02.746106  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0730 00:07:06.501255  503585 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0730 00:07:06.501313  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:07:06.504790  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:07:06.505292  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:07:06.505323  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:07:06.505625  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:07:06.505855  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:07:06.506074  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:07:06.506256  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:07:06.829073  503585 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0730 00:07:06.888096  503585 addons.go:234] Setting addon gcp-auth=true in "addons-091578"
	I0730 00:07:06.888162  503585 host.go:66] Checking if "addons-091578" exists ...
	I0730 00:07:06.888480  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:07:06.888517  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:07:06.904936  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36419
	I0730 00:07:06.905413  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:07:06.905993  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:07:06.906020  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:07:06.906407  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:07:06.906993  503585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:07:06.907039  503585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:07:06.924001  503585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38333
	I0730 00:07:06.924474  503585 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:07:06.925035  503585 main.go:141] libmachine: Using API Version  1
	I0730 00:07:06.925063  503585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:07:06.925482  503585 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:07:06.925739  503585 main.go:141] libmachine: (addons-091578) Calling .GetState
	I0730 00:07:06.927706  503585 main.go:141] libmachine: (addons-091578) Calling .DriverName
	I0730 00:07:06.928001  503585 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0730 00:07:06.928027  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHHostname
	I0730 00:07:06.932180  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:07:06.932883  503585 main.go:141] libmachine: (addons-091578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:5f:c4", ip: ""} in network mk-addons-091578: {Iface:virbr1 ExpiryTime:2024-07-30 01:06:18 +0000 UTC Type:0 Mac:52:54:00:f9:5f:c4 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-091578 Clientid:01:52:54:00:f9:5f:c4}
	I0730 00:07:06.932916  503585 main.go:141] libmachine: (addons-091578) DBG | domain addons-091578 has defined IP address 192.168.39.214 and MAC address 52:54:00:f9:5f:c4 in network mk-addons-091578
	I0730 00:07:06.933083  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHPort
	I0730 00:07:06.933361  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHKeyPath
	I0730 00:07:06.933565  503585 main.go:141] libmachine: (addons-091578) Calling .GetSSHUsername
	I0730 00:07:06.933747  503585 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/addons-091578/id_rsa Username:docker}
	I0730 00:07:07.836894  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.866740722s)
	I0730 00:07:07.836939  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.864673997s)
	I0730 00:07:07.836952  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.836966  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.836987  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837013  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837039  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.80986944s)
	I0730 00:07:07.837081  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837088  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.756538737s)
	I0730 00:07:07.837098  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837110  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837106  503585 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.731751326s)
	I0730 00:07:07.837148  503585 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.731604591s)
	I0730 00:07:07.837166  503585 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
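	The sed pipeline in the entry above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 here). A rough sketch of the injected Corefile fragment, reconstructed from the sed expression rather than dumped from the cluster:
	
		hosts {
		   192.168.39.1 host.minikube.internal
		   fallthrough
		}
		forward . /etc/resolv.conf
	
	The hosts block is inserted just before the existing forward directive, and a log directive just before errors.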
	I0730 00:07:07.837210  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.513529974s)
	I0730 00:07:07.837227  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837236  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837329  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.493234037s)
	I0730 00:07:07.837346  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837354  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837409  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.492876354s)
	I0730 00:07:07.837442  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837455  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837523  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.444908018s)
	I0730 00:07:07.837539  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837547  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.837810  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.398148276s)
	I0730 00:07:07.837834  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.837843  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.838117  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.109488528s)
	I0730 00:07:07.838140  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.838149  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.838167  503585 node_ready.go:35] waiting up to 6m0s for node "addons-091578" to be "Ready" ...
	I0730 00:07:07.837121  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.838275  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.772144948s)
	W0730 00:07:07.838304  503585 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0730 00:07:07.838321  503585 retry.go:31] will retry after 333.750071ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
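	The failure above (logged once by the addon installer and once by the retry helper) is the usual ordering race when a VolumeSnapshotClass is applied in the same batch as the CRDs that define it: the CRDs are created but not yet established, so the custom resource has no REST mapping. The retry, and the later apply --force run seen further down, succeeds once the CRDs are registered. A minimal sketch of the equivalent manual sequence, assuming kubectl access to the same cluster and the same addon manifests under /etc/kubernetes/addons:
	
		# create the snapshot CRDs first
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		# block until the CRD is established and serving
		kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# now the VolumeSnapshotClass can be mapped and applied
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml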
	I0730 00:07:07.838377  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.211175238s)
	I0730 00:07:07.838403  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.838414  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840133  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840141  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840147  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840153  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840157  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840165  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840167  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840168  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840173  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840176  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840177  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840182  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840187  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840190  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840195  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840232  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840253  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840259  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840267  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840274  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840317  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840326  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840340  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840347  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840355  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840362  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840371  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840380  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840385  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840402  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840410  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840417  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840424  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840443  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840468  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840476  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840484  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840490  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840551  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840563  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840570  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840578  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840587  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840637  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840644  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840655  503585 addons.go:475] Verifying addon metrics-server=true in "addons-091578"
	I0730 00:07:07.840349  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.840677  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.840734  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840756  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840763  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840799  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.840831  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.840838  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.840994  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.841019  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.841031  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.841039  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.841048  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.841426  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.841451  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.841477  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.841483  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.841908  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.841919  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842267  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.842292  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.842299  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842403  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.842437  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.842444  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842577  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.842588  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842790  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.842793  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.842804  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.842813  503585 addons.go:475] Verifying addon registry=true in "addons-091578"
	I0730 00:07:07.843070  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.843099  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.843105  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.843113  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.843120  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.843345  503585 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-091578 service yakd-dashboard -n yakd-dashboard
	
	I0730 00:07:07.844251  503585 out.go:177] * Verifying registry addon...
	I0730 00:07:07.846486  503585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0730 00:07:07.847042  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.847046  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.847056  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.847046  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.847070  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.847080  503585 addons.go:475] Verifying addon ingress=true in "addons-091578"
	I0730 00:07:07.847061  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.848511  503585 out.go:177] * Verifying ingress addon...
	I0730 00:07:07.850395  503585 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0730 00:07:07.852120  503585 node_ready.go:49] node "addons-091578" has status "Ready":"True"
	I0730 00:07:07.852141  503585 node_ready.go:38] duration metric: took 13.956338ms for node "addons-091578" to be "Ready" ...
	I0730 00:07:07.852152  503585 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:07:07.862259  503585 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0730 00:07:07.862279  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:07.862519  503585 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0730 00:07:07.862542  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
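	Each of the kapi.go:96 lines that follow is one poll of the labelled pods' state; the loop keeps reporting the current phase until the matching pods are ready. A hedged equivalent of roughly what a single poll checks, expressed as plain kubectl (label selectors taken from the log; the jsonpath output is just the pod phases):
	
		kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry -o jsonpath='{.items[*].status.phase}'
		kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[*].status.phase}'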
	I0730 00:07:07.891923  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.891954  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.892386  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:07.892444  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.892453  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:07.893067  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:07.893088  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:07.893352  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:07.893374  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	W0730 00:07:07.893483  503585 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
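	The warning above is an optimistic-concurrency conflict: while marking local-path as the default StorageClass, the object changed between read and update, so the apiserver rejected the stale write. Retrying against the latest object version is enough; a sketch of the manual equivalent using the standard default-class annotation:
	
		kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'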
	I0730 00:07:07.896029  503585 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fxsmn" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.924855  503585 pod_ready.go:92] pod "coredns-7db6d8ff4d-fxsmn" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:07.924882  503585 pod_ready.go:81] duration metric: took 28.821665ms for pod "coredns-7db6d8ff4d-fxsmn" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.924893  503585 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lznwz" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.968336  503585 pod_ready.go:92] pod "coredns-7db6d8ff4d-lznwz" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:07.968375  503585 pod_ready.go:81] duration metric: took 43.473374ms for pod "coredns-7db6d8ff4d-lznwz" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.968392  503585 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.995657  503585 pod_ready.go:92] pod "etcd-addons-091578" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:07.995689  503585 pod_ready.go:81] duration metric: took 27.288893ms for pod "etcd-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:07.995700  503585 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.011704  503585 pod_ready.go:92] pod "kube-apiserver-addons-091578" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:08.011739  503585 pod_ready.go:81] duration metric: took 16.031029ms for pod "kube-apiserver-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.011754  503585 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.173065  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 00:07:08.244306  503585 pod_ready.go:92] pod "kube-controller-manager-addons-091578" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:08.244348  503585 pod_ready.go:81] duration metric: took 232.584167ms for pod "kube-controller-manager-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.244364  503585 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4j5tl" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.343638  503585 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-091578" context rescaled to 1 replicas
	I0730 00:07:08.370288  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:08.373607  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:08.647385  503585 pod_ready.go:92] pod "kube-proxy-4j5tl" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:08.647412  503585 pod_ready.go:81] duration metric: took 403.039444ms for pod "kube-proxy-4j5tl" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.647422  503585 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:08.832682  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.086499617s)
	I0730 00:07:08.832771  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:08.832778  503585 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.904747095s)
	I0730 00:07:08.832796  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:08.833308  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:08.833345  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:08.833364  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:08.833378  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:08.833389  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:08.833676  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:08.833693  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:08.833706  503585 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-091578"
	I0730 00:07:08.833737  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:08.834514  503585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 00:07:08.835318  503585 out.go:177] * Verifying csi-hostpath-driver addon...
	I0730 00:07:08.836926  503585 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0730 00:07:08.838025  503585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0730 00:07:08.838202  503585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0730 00:07:08.838226  503585 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0730 00:07:08.882257  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:08.885055  503585 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0730 00:07:08.885075  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:08.891911  503585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0730 00:07:08.891936  503585 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0730 00:07:08.902649  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:09.018186  503585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0730 00:07:09.018215  503585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0730 00:07:09.042683  503585 pod_ready.go:92] pod "kube-scheduler-addons-091578" in "kube-system" namespace has status "Ready":"True"
	I0730 00:07:09.042716  503585 pod_ready.go:81] duration metric: took 395.286009ms for pod "kube-scheduler-addons-091578" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:09.042729  503585 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace to be "Ready" ...
	I0730 00:07:09.075767  503585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0730 00:07:09.344537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:09.350593  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:09.354290  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:09.844124  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:09.868807  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:09.876003  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:10.251897  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.078780501s)
	I0730 00:07:10.251962  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:10.251983  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:10.252377  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:10.252429  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:10.252452  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:10.252470  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:10.252479  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:10.252793  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:10.252837  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:10.252856  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:10.371528  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:10.391220  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:10.394600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:10.472057  503585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.396230203s)
	I0730 00:07:10.472119  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:10.472130  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:10.472537  503585 main.go:141] libmachine: (addons-091578) DBG | Closing plugin on server side
	I0730 00:07:10.472585  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:10.472602  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:10.472628  503585 main.go:141] libmachine: Making call to close driver server
	I0730 00:07:10.472639  503585 main.go:141] libmachine: (addons-091578) Calling .Close
	I0730 00:07:10.472904  503585 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:07:10.472926  503585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:07:10.474895  503585 addons.go:475] Verifying addon gcp-auth=true in "addons-091578"
	I0730 00:07:10.476314  503585 out.go:177] * Verifying gcp-auth addon...
	I0730 00:07:10.478269  503585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0730 00:07:10.491650  503585 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0730 00:07:10.491678  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:10.844058  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:10.850531  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:10.853830  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:10.985707  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:11.048454  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:11.363968  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:11.364151  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:11.366534  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:11.482243  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:11.843648  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:11.851109  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:11.853914  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:11.983362  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:12.343029  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:12.350822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:12.353918  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:12.482620  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:12.843279  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:12.851392  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:12.853852  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:12.981572  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:13.049277  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:13.343183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:13.351162  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:13.353619  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:13.482594  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:13.844812  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:13.850358  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:13.854003  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:13.981783  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:14.343820  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:14.350640  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:14.353725  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:14.482423  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:14.843094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:14.851043  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:14.853516  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:14.982766  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:15.345791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:15.351650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:15.354306  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:15.482280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:15.548741  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:15.843969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:15.850979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:15.853723  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:15.982500  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:16.343022  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:16.351425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:16.353870  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:16.482418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:16.844570  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:16.850630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:16.853427  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:16.982819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:17.343184  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:17.350630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:17.353605  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:17.482455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:17.844008  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:17.851249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:17.853367  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:17.982088  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:18.048069  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:18.343185  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:18.352097  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:18.353646  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:18.482836  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:18.843418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:18.852183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:18.854045  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:18.981929  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:19.343964  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:19.351341  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:19.353797  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:19.482691  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:19.843712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:19.850740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:19.853946  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:19.982383  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:20.048285  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:20.343084  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:20.351306  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:20.353545  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:20.482522  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:20.843251  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:20.851325  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:20.854210  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:20.982252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:21.344543  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:21.352046  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:21.357792  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:21.483335  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:21.843292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:21.851477  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:21.854734  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:21.982514  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:22.051934  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:22.344558  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:22.351220  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:22.353738  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:22.482800  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:22.843712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:22.850770  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:22.855079  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:22.981891  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:23.343642  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:23.350544  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:23.353887  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:23.482208  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:23.843137  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:23.851554  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:23.853890  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:23.981651  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:24.345138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:24.350965  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:24.354406  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:24.481869  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:24.549429  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:24.843310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:24.851671  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:24.853849  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:24.982661  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:25.343633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:25.354673  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:25.357186  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:25.481940  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:25.843410  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:25.851271  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:25.853501  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:25.982553  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:26.343742  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:26.350666  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:26.353424  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:26.482416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:26.843750  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:26.851543  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:26.853800  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:26.982796  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:27.049414  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:27.343927  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:27.350983  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:27.353387  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:27.482584  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:27.844638  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:27.850444  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:27.853499  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:27.982418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:28.343453  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:28.352261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:28.354998  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:28.481881  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:28.843532  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:28.850819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:28.854958  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:28.981706  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:29.345364  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:29.351298  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:29.353697  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:29.482500  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:29.549440  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:29.843766  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:29.851129  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:29.853438  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:29.982790  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:30.344033  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:30.352013  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:30.354945  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:30.482106  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:30.843546  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:30.850394  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:30.854734  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:30.981845  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:31.343173  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:31.355052  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:31.355350  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:31.482412  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:31.844178  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:31.851431  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:31.853676  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:31.983434  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:32.048898  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:32.344537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:32.353248  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:32.355143  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:32.481697  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:32.843449  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:32.850768  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:32.854451  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:32.982512  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:33.343226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:33.351094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:33.353960  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:33.481606  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:33.843603  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:33.850413  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:33.853352  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:33.982118  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:34.343814  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:34.350773  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:34.353950  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:34.482323  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:34.549131  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:34.843930  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:34.853285  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:34.854774  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:34.982570  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:35.343810  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:35.352082  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:35.354756  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:35.484072  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:35.844696  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:35.851226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:35.853802  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:35.982383  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:36.343921  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:36.350292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:36.353667  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:36.482191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:36.844525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:36.851191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:36.853773  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:36.982629  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:37.048807  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:37.344210  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:37.351733  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:37.354545  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:37.482985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:37.843110  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:37.850999  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:37.853463  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:37.982663  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:38.344275  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:38.351930  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:38.353896  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:38.481790  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:38.843775  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:38.850960  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:38.853644  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:38.982774  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:39.050688  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:39.343792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:39.350633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:39.353791  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:39.482650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:39.844123  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:39.851256  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:39.853702  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:39.982556  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:40.344233  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:40.351500  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:40.353800  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:40.483421  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:40.844735  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:40.851657  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:40.854045  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:40.982308  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:41.343507  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:41.351013  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:41.353446  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:41.482726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:41.548910  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:41.844094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:41.852902  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:41.856902  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:41.981767  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:42.402445  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:42.402556  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:42.404965  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:42.482860  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:42.843998  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:42.850290  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:42.853958  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:42.981824  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:43.344249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:43.351537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:43.354198  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:43.482118  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:43.843995  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:43.851292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:43.854128  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:43.981989  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:44.049006  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:44.343658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:44.350951  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:44.354188  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:44.482703  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:44.843727  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:44.851527  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:44.854098  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:44.982598  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:45.343992  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:45.350988  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:45.353584  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:45.482285  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:45.844400  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:45.851200  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:45.853725  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:45.982732  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:46.050116  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:46.344117  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:46.350925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:46.353680  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:46.482506  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:46.844820  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:46.854828  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:46.856621  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:46.983261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:47.344886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:47.351811  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:47.354270  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:47.482478  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:47.843963  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:47.850749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:47.853836  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:47.981927  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:48.343967  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:48.351000  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:48.354272  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:48.482388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:48.548406  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:48.843728  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:48.850811  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:48.854731  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:48.982576  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:49.343680  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:49.351103  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:49.353483  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:49.481997  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:49.843712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:49.852118  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:49.854888  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:49.981817  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:50.344186  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:50.350847  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:50.353821  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:50.482301  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:50.548463  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:50.843569  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:50.850650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:50.853617  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:50.982253  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:51.342881  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:51.353405  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:51.354854  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:51.482035  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:51.843645  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:51.850702  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:51.854439  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:51.982420  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:52.345146  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:52.351908  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:52.354479  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:52.482650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:52.549236  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:52.843406  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:52.856627  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:52.856790  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:52.981961  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:53.344377  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:53.351310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:53.353565  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:53.482627  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:53.844252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:53.851579  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:53.854121  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:53.983932  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:54.343420  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:54.350339  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:54.354100  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:54.482280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:54.551800  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:54.843645  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:54.850886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:54.854015  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:54.981828  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:55.343220  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:55.352132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:55.353792  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:55.482929  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:55.844070  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:55.850555  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:55.853810  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:55.982822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:56.344304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:56.352439  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:56.354203  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:56.482975  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:56.843732  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:56.851837  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:56.854234  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:56.982827  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:57.048960  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:57.343973  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:57.351189  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:57.353751  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:57.482479  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:57.843386  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:57.850185  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:57.853866  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:57.982493  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:58.343319  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:58.352882  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:58.354700  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:58.482480  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:58.844366  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:58.851528  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:58.853994  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:58.981842  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:59.343414  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:59.350561  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:59.353615  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:59.482406  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:07:59.549113  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:07:59.844593  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:07:59.851651  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:07:59.854335  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:07:59.982304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:00.344440  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:00.350713  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:00.353913  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:00.481656  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:00.844904  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:00.850898  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:00.854095  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:00.981928  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:01.343196  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:01.356435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:01.360478  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:01.481858  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:01.549395  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:01.843259  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:01.851569  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:01.853772  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:01.982694  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:02.346056  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:02.351333  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:02.354280  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:02.481679  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:02.847773  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:02.852459  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:02.855414  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:02.982484  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:03.343519  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:03.351379  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:03.353705  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:03.482433  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:03.549702  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:03.844566  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:03.850863  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:03.853702  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:03.983614  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:04.344502  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:04.350752  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:04.354718  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:04.482062  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:04.844407  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:04.850879  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:04.854230  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:04.983988  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:05.343112  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:05.351050  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:05.353725  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:05.482813  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:05.549778  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:05.844354  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:05.851349  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:05.854180  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:05.981475  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:06.346987  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:06.352486  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:06.355978  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:06.482580  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:06.843438  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:06.851851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:06.853714  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:06.982516  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:07.343375  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:07.352293  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:07.353946  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:07.482301  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:07.843326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:07.851419  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:07.853847  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:07.982710  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:08.049580  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:08.343908  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:08.351812  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:08.355340  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:08.482296  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:08.842516  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:08.851189  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:08.853849  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:08.981641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:09.343837  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:09.351010  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:09.353640  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:09.482832  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:09.843815  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:09.851064  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:09.853931  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:09.981643  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:10.346325  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:10.357065  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:10.357293  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:10.482178  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:10.548688  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:10.843439  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:10.850015  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:10.853667  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:10.982477  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:11.343232  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:11.351455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:11.353779  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:11.481598  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:11.843994  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:11.851724  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:11.854685  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:11.982424  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:12.344106  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:12.350926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:12.353582  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:12.482786  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:12.549239  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:12.843526  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:12.852261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:12.854703  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:12.983132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:13.343813  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:13.351243  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:13.354540  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:13.482045  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:13.843789  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:13.851117  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:13.853787  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:13.983451  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:14.344699  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:14.350063  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:14.353452  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:14.481880  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:14.844364  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:14.850636  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:14.854423  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:14.982218  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:15.049097  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:15.344347  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:15.350326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:15.355529  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:15.482207  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:15.843723  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:15.850721  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:15.854186  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:15.981960  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:16.344534  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:16.351788  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:16.354742  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:16.482803  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:16.843356  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:16.850435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:16.853578  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:16.982633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:17.049386  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:17.343207  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:17.352620  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:17.354727  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:17.482927  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:17.843952  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:17.853225  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:17.856305  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:17.982242  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:18.344537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:18.350414  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:18.353542  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:18.482713  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:18.843442  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:18.850775  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:18.853691  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:18.982612  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:19.051202  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:19.345866  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:19.353119  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:19.355450  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:19.482873  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:19.844424  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:19.852187  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:19.858651  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:19.982372  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:20.344142  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:20.350815  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:20.353766  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:20.483378  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:20.844183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:20.851249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:20.853904  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:20.981969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:21.346144  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:21.354796  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:21.359119  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:21.481956  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:21.549839  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:21.843890  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:21.850853  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:21.854318  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:21.982291  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:22.344121  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:22.350877  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:22.354379  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:22.482298  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:22.844064  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:22.851529  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:22.854293  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:22.981883  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:23.343058  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:23.351332  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:23.353502  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:23.482281  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:23.843326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:23.851691  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:23.854082  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:23.981979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:24.049497  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:24.344473  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:24.350162  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:24.353749  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:24.482198  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:24.843791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:24.850425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:24.853272  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:24.982365  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:25.345663  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:25.350363  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:25.354068  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:25.482261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:25.844231  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:25.851270  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:25.853839  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:25.981852  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:26.346367  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:26.350230  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:26.353913  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:26.481771  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:26.548681  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:26.843434  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:26.850514  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:26.853922  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:26.981519  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:27.343641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:27.353773  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:27.355605  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:27.482525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:27.844320  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:27.850521  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:27.853641  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:27.982645  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:28.348429  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:28.350861  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:28.355083  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:28.481909  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:28.552666  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:28.844060  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:28.851689  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:28.853966  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:28.985900  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:29.343608  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:29.351025  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:29.353693  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:29.482396  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:29.843837  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:29.850847  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:29.853867  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:29.982249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:30.344425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:30.350400  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:30.353428  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:30.482106  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:30.843951  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:30.850610  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:30.853670  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:30.982417  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:31.049668  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:31.344158  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:31.351175  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:31.353829  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:31.481603  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:31.844127  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:31.851301  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:31.854833  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:31.981856  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:32.344985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:32.351313  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:32.354007  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:32.482946  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:32.843128  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:32.851210  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:32.854116  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:32.981886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:33.343119  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:33.351149  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:33.353567  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:33.482662  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:33.548680  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:33.843784  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:33.850809  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:33.853862  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:33.981873  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:34.345295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:34.351004  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:34.353981  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:34.482126  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:34.843244  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:34.851134  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:34.853550  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:34.982223  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:35.343307  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:35.351774  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:35.354003  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:35.482555  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:35.548963  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:35.844251  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:35.851128  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:35.853572  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:35.982897  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:36.345134  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:36.350876  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:36.354017  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:36.481841  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:36.844262  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:36.851323  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:36.853775  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:36.984494  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:37.343164  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:37.351257  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:37.354435  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:37.482369  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:37.843194  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:37.851449  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:37.853983  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:37.981950  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:38.049088  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:38.345455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:38.351277  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:38.353940  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:38.482205  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:38.843435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:38.850014  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:38.853716  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:38.982130  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:39.343234  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:39.351273  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:39.355103  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:39.482966  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:39.845312  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:39.851878  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:39.854780  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:39.984121  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:40.344295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:40.350806  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:40.353466  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:40.482588  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:40.549160  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:40.842868  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:40.851126  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:40.853468  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:40.982288  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:41.343843  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:41.350739  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:41.353504  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:41.482136  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:41.843901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:41.850975  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:41.853513  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:41.982247  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:42.346865  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:42.351761  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:42.355171  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:42.482308  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:42.844479  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:42.851105  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:42.853819  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:42.981848  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:43.049144  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:43.343956  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:43.351564  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:43.354796  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:43.483198  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:43.844033  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:43.851819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:43.854506  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:43.982306  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:44.345602  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:44.351168  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:44.354494  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:44.481824  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:44.844103  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:44.851630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:44.855578  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:44.981946  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:45.344395  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:45.355760  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:45.357617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:45.482587  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:45.550116  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:45.844547  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:45.850740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:45.853815  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:45.981729  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:46.345207  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:46.352490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:46.354123  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:46.481986  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:46.844701  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:46.852432  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:46.854034  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:46.981906  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:47.344586  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:47.351945  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:47.354351  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:47.482233  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:47.844824  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:47.851380  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:47.853924  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:47.982700  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:48.049817  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:48.347034  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:48.351478  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:48.353949  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:48.481985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:48.852685  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:48.864491  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:48.864663  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:48.983055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:49.344491  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:49.352129  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:49.355018  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:49.481982  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:49.843591  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:49.850305  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:49.854167  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:49.982203  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:50.348233  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:50.350364  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:50.354610  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:50.482432  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:50.549021  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:50.843647  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:50.850923  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:50.853562  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:50.982696  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:51.342946  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:51.350756  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:51.353857  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:51.481525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:51.844180  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:51.851458  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:51.854096  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:51.982079  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:52.349661  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:52.352121  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:52.354572  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:52.483025  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:52.549839  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:52.843914  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:52.851765  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:52.854814  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:52.983173  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:53.343219  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:53.351458  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:53.354685  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:53.483175  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:53.842972  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:53.850871  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:53.854151  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:53.983215  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:54.347722  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:54.350270  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:54.353723  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:54.482428  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:54.843280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:54.851289  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:54.853644  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:54.983269  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:55.047846  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:55.344405  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:55.350295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:55.354736  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:55.482816  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:55.845449  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:55.850280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:55.854434  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:55.982579  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:56.342979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:56.351746  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:56.354704  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:56.482463  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:56.842955  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:56.851296  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:56.853943  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:56.981715  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:57.048975  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:57.343928  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:57.350914  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:57.353455  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:57.482444  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:57.844233  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:57.851886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:57.855779  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:57.982704  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:58.346944  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:58.350842  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:58.354157  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:58.482360  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:58.843329  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:58.851388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:58.854029  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:58.981513  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:59.343827  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:59.351040  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:59.353885  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:59.483484  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:08:59.549538  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:08:59.844437  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:08:59.851150  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:08:59.854798  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:08:59.983489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:00.345368  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:00.351061  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:00.353477  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:00.482455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:00.843914  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:00.850032  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:00.853651  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:00.982840  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:01.343389  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:01.350513  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:01.354476  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:01.481991  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:01.843544  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:01.851488  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:01.853996  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:01.981763  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:02.049081  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:02.345994  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:02.350760  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:02.353779  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:02.481640  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:02.844969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:02.850505  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:02.853974  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:02.981759  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:03.343685  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:03.351395  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:03.353906  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:03.481667  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:03.844378  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:03.852379  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:03.854111  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:03.982059  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:04.342749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:04.350615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:04.353945  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:04.481898  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:04.548654  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:04.843359  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:04.850476  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:04.855158  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:04.982035  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:05.342811  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:05.351255  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:05.354131  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:05.482194  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:05.842979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:05.851489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:05.854323  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:05.983365  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:06.343875  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:06.351085  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:06.353988  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:06.481688  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:06.548855  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:06.843490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:06.850552  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:06.853634  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:06.982288  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:07.344518  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:07.351104  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:07.354677  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:07.483706  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:07.843510  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:07.850563  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:07.853707  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:07.982642  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:08.348269  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:08.351137  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:08.353745  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:08.482937  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:08.549239  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:08.842797  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:08.850715  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:08.853927  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:08.981988  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:09.343688  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:09.350327  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:09.353933  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:09.481650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:09.843156  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:09.851075  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:09.853239  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:09.981896  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:10.346709  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:10.350332  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:10.353871  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:10.481962  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:10.549326  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:10.846142  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:10.851380  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:10.854131  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:10.981950  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:11.342760  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:11.351475  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:11.354716  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:11.481980  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:11.844018  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:11.851143  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:11.854126  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:11.982094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:12.343336  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:12.351435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:12.354564  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:12.481913  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:12.844409  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:12.858995  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:12.859349  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:12.982371  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:13.047986  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:13.343733  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:13.350474  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:13.353362  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:13.482596  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:13.843577  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:13.850561  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:13.854064  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:13.982587  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:14.344426  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:14.351050  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:14.354039  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:14.481428  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:14.843405  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:14.851688  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:14.854448  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:14.983435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:15.048413  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:15.344940  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:15.422148  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:15.422478  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:15.541033  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:15.861231  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:15.871577  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:15.874902  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:15.981679  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:16.344157  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:16.351109  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:16.353556  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:16.482460  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:16.843443  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:16.851023  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:16.853617  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:16.983038  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:17.048937  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:17.344522  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:17.351988  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:17.354022  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:17.481705  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:17.843728  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:17.850529  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:17.853608  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:17.982364  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:18.345261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:18.352653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:18.354632  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:18.482504  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:18.846635  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:18.850787  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:18.854033  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:18.982416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:19.049862  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:19.343851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:19.350371  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:19.354969  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:19.481430  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:19.843765  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:19.853128  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:19.854938  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:19.982577  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:20.344587  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:20.350943  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:20.354586  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:20.482027  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:20.843489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:20.850852  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:20.854615  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:20.981951  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:21.343534  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:21.350676  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:21.353546  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:21.483091  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:21.552816  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:21.842926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:21.851041  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:21.853806  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:21.981822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:22.346652  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:22.350918  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:22.354512  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:22.483615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:22.844512  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:22.851436  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:22.854024  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:22.981763  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:23.343474  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:23.351895  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:23.354507  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:23.482230  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:23.844211  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:23.851319  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:23.853457  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:23.982146  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:24.048805  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:24.343636  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:24.350868  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:24.354075  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:24.481397  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:24.843485  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:24.850372  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:24.853982  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:24.982052  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:25.343976  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:25.350534  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:25.354056  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:25.482473  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:25.844525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:25.855468  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:25.863745  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:25.983532  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:26.049822  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:26.349151  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:26.351199  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:26.353278  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:26.481925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:26.846445  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:26.851304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:26.854300  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:26.982266  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:27.343820  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:27.351209  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:27.354164  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:27.482529  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:27.843895  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:27.853028  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:27.855339  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:27.982314  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:28.346105  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:28.351483  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:28.353521  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:28.482641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:28.549109  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:28.843423  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:28.850303  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:28.854205  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:28.981708  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:29.343628  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:29.351292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:29.353848  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:29.481742  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:29.843661  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:29.853236  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:29.854806  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:29.983347  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:30.349599  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:30.351544  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:30.354270  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:30.482183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:30.549953  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:30.844600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:30.851661  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:30.854241  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:30.981847  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:31.344143  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:31.350855  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:31.354284  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:31.481630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:31.842954  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:31.851470  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:31.854089  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:31.982056  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:32.346240  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:32.351135  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:32.353646  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:32.482680  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:32.844381  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:32.850507  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:32.854280  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:32.982570  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:33.048670  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:33.343861  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:33.351032  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:33.353598  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:33.482957  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:33.843188  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:33.851435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:33.854144  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:33.981922  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:34.351446  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:34.353901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:34.357389  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:34.482993  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:34.845060  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:34.850949  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:34.854678  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:34.982792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:35.048826  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:35.344547  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:35.350373  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:35.353706  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:35.483599  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:35.844373  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:35.851834  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:35.856373  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:35.981756  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:36.349389  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:36.352624  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:36.355147  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:36.482110  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:36.843579  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:36.850519  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:36.853931  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:36.981818  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:37.344409  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:37.350749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:37.353977  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:37.482103  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:37.548506  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:37.844111  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:37.851263  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:37.853508  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:37.982524  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:38.347265  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:38.352497  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:38.354292  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:38.482133  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:38.843490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:38.850839  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:38.854221  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:38.982191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:39.343888  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:39.351560  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:39.354463  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:39.482696  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:39.548875  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:39.844144  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:39.851387  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:39.853913  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:39.982643  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:40.346429  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:40.350484  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:40.353677  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:40.483250  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:40.844009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:40.851180  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:40.856458  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:40.982090  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:41.343520  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:41.350791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:41.353748  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:41.483255  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:41.843787  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:41.851234  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:41.854287  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:41.982640  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:42.050122  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:42.344677  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:42.350830  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:42.354080  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:42.482210  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:42.843872  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:42.855659  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:42.858299  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:42.982018  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:43.343608  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:43.351199  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:43.353686  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:43.482598  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:43.843617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:43.850653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:43.853919  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:43.981668  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:44.342936  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:44.352083  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:44.354469  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:44.482177  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:44.549552  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:44.843427  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:44.850367  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:44.854231  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:44.982181  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:45.344138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:45.351273  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:45.353524  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:45.482110  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:45.844416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:45.851238  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:45.853626  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:45.983171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:46.343426  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:46.350334  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:46.353825  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:46.482611  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:46.843235  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:46.851125  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:46.853770  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:46.982225  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:47.049151  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:47.344543  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:47.350443  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:47.354275  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:47.481867  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:47.844343  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:47.850502  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:47.853652  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:47.982438  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:48.347810  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:48.350607  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:48.353649  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:48.482555  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:48.844171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:48.853494  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:48.854415  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:48.982009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:49.344643  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:49.351269  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:49.353644  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:49.482261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:49.549283  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:49.843334  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:49.851401  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:49.853868  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:49.982204  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:50.343728  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:50.350786  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:50.354156  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:50.482411  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:50.843727  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:50.850545  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:50.853536  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:50.982598  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:51.344164  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:51.351916  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:51.357436  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:51.482369  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:51.844060  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:51.852334  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:51.853973  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:51.981652  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:52.049061  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:52.345338  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:52.351560  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:52.354672  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:52.482718  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:52.843810  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:52.851255  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:52.853987  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:52.981793  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:53.343373  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:53.354252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:53.356979  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:53.481414  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:53.843508  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:53.853113  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:53.855440  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:53.982390  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:54.346586  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:54.352354  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:54.354930  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:54.481359  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:54.548352  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:54.845821  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:54.852628  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:54.854722  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:54.982476  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:55.344900  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:55.350103  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:55.354316  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:55.482206  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:55.844327  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:55.851669  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:55.853955  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:55.982154  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:56.346088  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:56.351852  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:56.354320  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:56.482538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:56.549083  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:56.843340  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:56.850116  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:56.853967  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:56.982193  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:57.344189  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:57.351539  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:57.357821  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:57.809944  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:57.846290  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:57.851755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:57.854733  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:57.982581  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:58.349265  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:58.354388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:58.355035  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:58.483040  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:58.550464  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:09:58.844309  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:58.854484  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:58.854588  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:58.982953  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:59.346793  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:59.352954  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:59.355019  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:59.482527  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:09:59.846095  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:09:59.851477  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:09:59.854072  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:09:59.982076  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:00.347721  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:00.355202  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:00.355267  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:00.482896  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:00.846926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:00.855323  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:00.855922  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:00.982909  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:01.050126  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:01.347150  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:01.350847  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:01.354451  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:01.482319  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:01.844201  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:01.851549  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:01.853890  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:01.982191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:02.344284  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:02.350744  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:02.353914  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:02.481748  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:02.844261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:02.851500  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:02.854311  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:02.982624  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:03.344311  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:03.352079  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:03.353439  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:03.482737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:03.549136  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:03.845649  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:03.850727  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:03.854379  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:03.982343  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:04.345094  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:04.351384  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:04.354356  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:04.482499  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:04.845184  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:04.861119  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:04.861405  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:04.982489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:05.345111  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:05.351109  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:05.353521  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:05.483466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:05.844353  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:05.852006  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:05.854112  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:05.981990  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:06.049577  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:06.363740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:06.366704  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:06.371160  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:06.482611  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:06.844268  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:06.851280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:06.853864  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:06.981892  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:07.345906  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:07.352657  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:07.356175  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:07.481744  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:07.844120  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:07.852386  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:07.853896  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:07.981908  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:08.345984  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:08.352178  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:08.354975  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:08.482326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:08.548340  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:08.846677  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:08.852139  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:08.854927  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:08.981863  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:09.343676  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:09.350865  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:09.353850  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:09.482091  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:09.843876  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:09.850863  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:09.853765  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:09.981733  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:10.343805  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:10.353665  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:10.358100  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:10.481821  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:10.549143  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:10.843684  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:10.851156  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:10.853700  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:10.982711  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:11.343386  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:11.350404  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:11.353563  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:11.482503  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:11.843471  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:11.850804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:11.854394  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:11.982205  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:12.344559  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:12.356493  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:12.360685  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:12.482904  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:12.549689  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:12.845362  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:12.851444  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:12.854192  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:12.982144  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:13.343490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:13.350748  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:13.354164  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:13.482393  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:13.843490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:13.851140  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:13.853955  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:13.981899  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:14.345075  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:14.353749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:14.356306  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:14.482935  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:14.845206  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:14.851418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:14.854953  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:14.982832  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:15.050718  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:15.344698  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:15.352147  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:15.354814  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:15.482914  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:15.843594  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:15.850612  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:15.854250  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:15.981822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:16.344352  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:16.355068  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:16.355223  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:16.482512  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:16.843877  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:16.852165  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:16.854206  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:16.983784  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:17.343952  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:17.351134  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:17.354320  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:17.482316  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:17.548165  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:17.843235  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:17.854547  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:17.854757  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:17.982635  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:18.344289  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:18.355450  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:18.355895  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:18.482831  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:18.843523  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:18.850790  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:18.853690  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:18.982812  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:19.343805  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:19.352950  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:19.354818  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:19.481408  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:19.548450  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:19.843617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:19.851698  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:19.854050  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:19.981911  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:20.343486  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:20.350919  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:20.353484  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:20.482432  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:20.844059  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:20.852064  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:20.853902  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:20.981606  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:21.343087  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:21.351333  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:21.353942  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:21.482870  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:21.549780  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:21.843885  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:21.851308  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:21.854091  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:21.981724  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:22.343302  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:22.356583  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:22.357293  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:22.482332  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:22.844030  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:22.851904  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:22.854405  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:22.982430  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:23.345342  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:23.352058  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:23.354567  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:23.482502  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:23.551285  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:23.843867  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:23.851747  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:23.854235  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:23.982743  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:24.349615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:24.352798  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:24.354361  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:24.484973  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:24.843941  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:24.851478  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:24.854826  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:24.981950  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:25.343330  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:25.352718  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:25.354399  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:25.482351  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:25.844277  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:25.851538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:25.853730  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:25.982701  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:26.049014  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:26.344072  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:26.357295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:26.359202  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:26.482140  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:26.842829  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:26.851056  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:26.854715  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:26.981985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:27.343390  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:27.351538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:27.354476  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:27.482428  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:27.843658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:27.850948  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:27.853870  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:27.981973  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:28.049466  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:28.343518  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:28.351517  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:28.354329  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:28.482366  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:28.844149  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:28.851882  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:28.854366  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:28.982672  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:29.343695  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:29.350530  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:29.353902  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:29.482133  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:29.845869  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:29.850937  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:29.854579  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:29.982047  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:30.343525  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:30.358083  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:30.358549  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:30.482345  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:30.549555  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:30.844496  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:30.851255  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:30.853781  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:30.981838  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:31.343630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:31.350653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:31.353761  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:31.482422  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:31.843777  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:31.850969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:31.854433  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:31.982508  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:32.343540  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:32.352792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:32.355565  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:32.482895  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:32.843319  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:32.851832  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:32.854804  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:32.983304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:33.048824  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:33.343844  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:33.355020  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:33.358766  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:33.482999  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:33.844041  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:33.857517  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:33.861002  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:33.982350  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:34.345104  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:34.357915  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:34.358115  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:34.485315  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:34.845168  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:34.853582  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:34.855171  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:34.981985  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:35.049258  503585 pod_ready.go:102] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"False"
	I0730 00:10:35.343168  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:35.351617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:35.357917  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:35.482389  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:35.551202  503585 pod_ready.go:92] pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace has status "Ready":"True"
	I0730 00:10:35.551227  503585 pod_ready.go:81] duration metric: took 3m26.508489745s for pod "metrics-server-c59844bb4-4z28f" in "kube-system" namespace to be "Ready" ...
	I0730 00:10:35.551241  503585 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ln654" in "kube-system" namespace to be "Ready" ...
	I0730 00:10:35.555391  503585 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ln654" in "kube-system" namespace has status "Ready":"True"
	I0730 00:10:35.555416  503585 pod_ready.go:81] duration metric: took 4.165642ms for pod "nvidia-device-plugin-daemonset-ln654" in "kube-system" namespace to be "Ready" ...
	I0730 00:10:35.555447  503585 pod_ready.go:38] duration metric: took 3m27.703278328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:10:35.555497  503585 api_server.go:52] waiting for apiserver process to appear ...
	I0730 00:10:35.555545  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 00:10:35.555620  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 00:10:35.603203  503585 cri.go:89] found id: "cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:35.603232  503585 cri.go:89] found id: ""
	I0730 00:10:35.603243  503585 logs.go:276] 1 containers: [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363]
	I0730 00:10:35.603298  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.607385  503585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 00:10:35.607465  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 00:10:35.647403  503585 cri.go:89] found id: "499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:35.647426  503585 cri.go:89] found id: ""
	I0730 00:10:35.647438  503585 logs.go:276] 1 containers: [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9]
	I0730 00:10:35.647499  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.651234  503585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 00:10:35.651309  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 00:10:35.686655  503585 cri.go:89] found id: "f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:35.686684  503585 cri.go:89] found id: ""
	I0730 00:10:35.686694  503585 logs.go:276] 1 containers: [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330]
	I0730 00:10:35.686763  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.690805  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 00:10:35.690875  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 00:10:35.725597  503585 cri.go:89] found id: "3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:35.725619  503585 cri.go:89] found id: ""
	I0730 00:10:35.725627  503585 logs.go:276] 1 containers: [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568]
	I0730 00:10:35.725679  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.729678  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 00:10:35.729748  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 00:10:35.764742  503585 cri.go:89] found id: "ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:35.764769  503585 cri.go:89] found id: ""
	I0730 00:10:35.764778  503585 logs.go:276] 1 containers: [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d]
	I0730 00:10:35.764844  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.769112  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 00:10:35.769186  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 00:10:35.809087  503585 cri.go:89] found id: "60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:35.809109  503585 cri.go:89] found id: ""
	I0730 00:10:35.809119  503585 logs.go:276] 1 containers: [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952]
	I0730 00:10:35.809184  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:35.813304  503585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 00:10:35.813387  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 00:10:35.845044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:35.852762  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:35.855462  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:35.861476  503585 cri.go:89] found id: ""
	I0730 00:10:35.861498  503585 logs.go:276] 0 containers: []
	W0730 00:10:35.861508  503585 logs.go:278] No container was found matching "kindnet"
	I0730 00:10:35.861521  503585 logs.go:123] Gathering logs for kube-proxy [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d] ...
	I0730 00:10:35.861539  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:35.895544  503585 logs.go:123] Gathering logs for container status ...
	I0730 00:10:35.895578  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 00:10:35.942760  503585 logs.go:123] Gathering logs for dmesg ...
	I0730 00:10:35.942792  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 00:10:35.957530  503585 logs.go:123] Gathering logs for describe nodes ...
	I0730 00:10:35.957566  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 00:10:35.981982  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:36.086048  503585 logs.go:123] Gathering logs for kube-apiserver [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363] ...
	I0730 00:10:36.086090  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:36.142889  503585 logs.go:123] Gathering logs for kube-scheduler [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568] ...
	I0730 00:10:36.142921  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:36.185336  503585 logs.go:123] Gathering logs for kube-controller-manager [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952] ...
	I0730 00:10:36.185371  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:36.246428  503585 logs.go:123] Gathering logs for CRI-O ...
	I0730 00:10:36.246469  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 00:10:36.344310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:36.352815  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:36.354505  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:36.482558  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:36.845109  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:36.851658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:36.853927  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:36.981643  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:37.149702  503585 logs.go:123] Gathering logs for kubelet ...
	I0730 00:10:37.149757  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 00:10:37.227004  503585 logs.go:123] Gathering logs for etcd [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9] ...
	I0730 00:10:37.227050  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:37.272012  503585 logs.go:123] Gathering logs for coredns [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330] ...
	I0730 00:10:37.272069  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:37.344949  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:37.352071  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:37.355240  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:37.482438  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:37.844653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:37.851064  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:37.853735  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:37.983425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:38.344464  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:38.355798  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:38.357710  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:38.482886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:38.844075  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:38.851062  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:38.854025  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:38.982262  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:39.343227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:39.351804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:39.354242  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:39.482851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:39.815157  503585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:10:39.835113  503585 api_server.go:72] duration metric: took 3m40.494284008s to wait for apiserver process to appear ...
	I0730 00:10:39.835156  503585 api_server.go:88] waiting for apiserver healthz status ...
	I0730 00:10:39.835206  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 00:10:39.835283  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 00:10:39.843434  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:39.852597  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:39.855118  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:39.873982  503585 cri.go:89] found id: "cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:39.874004  503585 cri.go:89] found id: ""
	I0730 00:10:39.874013  503585 logs.go:276] 1 containers: [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363]
	I0730 00:10:39.874094  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:39.878094  503585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 00:10:39.878171  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 00:10:39.922245  503585 cri.go:89] found id: "499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:39.922266  503585 cri.go:89] found id: ""
	I0730 00:10:39.922274  503585 logs.go:276] 1 containers: [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9]
	I0730 00:10:39.922328  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:39.926115  503585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 00:10:39.926161  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 00:10:39.959528  503585 cri.go:89] found id: "f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:39.959553  503585 cri.go:89] found id: ""
	I0730 00:10:39.959561  503585 logs.go:276] 1 containers: [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330]
	I0730 00:10:39.959615  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:39.964358  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 00:10:39.964425  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 00:10:39.982418  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:39.999510  503585 cri.go:89] found id: "3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:39.999533  503585 cri.go:89] found id: ""
	I0730 00:10:39.999541  503585 logs.go:276] 1 containers: [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568]
	I0730 00:10:39.999605  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:40.003701  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 00:10:40.003770  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 00:10:40.038352  503585 cri.go:89] found id: "ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:40.038380  503585 cri.go:89] found id: ""
	I0730 00:10:40.038391  503585 logs.go:276] 1 containers: [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d]
	I0730 00:10:40.038461  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:40.042807  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 00:10:40.042871  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 00:10:40.077327  503585 cri.go:89] found id: "60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:40.077353  503585 cri.go:89] found id: ""
	I0730 00:10:40.077363  503585 logs.go:276] 1 containers: [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952]
	I0730 00:10:40.077414  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:40.081214  503585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 00:10:40.081300  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 00:10:40.114924  503585 cri.go:89] found id: ""
	I0730 00:10:40.114963  503585 logs.go:276] 0 containers: []
	W0730 00:10:40.114975  503585 logs.go:278] No container was found matching "kindnet"
	I0730 00:10:40.114987  503585 logs.go:123] Gathering logs for dmesg ...
	I0730 00:10:40.115004  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 00:10:40.128502  503585 logs.go:123] Gathering logs for kube-scheduler [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568] ...
	I0730 00:10:40.128532  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:40.169837  503585 logs.go:123] Gathering logs for kube-proxy [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d] ...
	I0730 00:10:40.169873  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:40.210979  503585 logs.go:123] Gathering logs for kube-controller-manager [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952] ...
	I0730 00:10:40.211010  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:40.265650  503585 logs.go:123] Gathering logs for CRI-O ...
	I0730 00:10:40.265699  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 00:10:40.353141  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:40.355590  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:40.360791  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:40.482739  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:40.843885  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:40.850745  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:40.854208  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:40.982371  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:41.027434  503585 logs.go:123] Gathering logs for kubelet ...
	I0730 00:10:41.027493  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 00:10:41.097577  503585 logs.go:123] Gathering logs for describe nodes ...
	I0730 00:10:41.097622  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 00:10:41.216178  503585 logs.go:123] Gathering logs for kube-apiserver [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363] ...
	I0730 00:10:41.216215  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:41.271450  503585 logs.go:123] Gathering logs for etcd [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9] ...
	I0730 00:10:41.271496  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:41.322552  503585 logs.go:123] Gathering logs for coredns [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330] ...
	I0730 00:10:41.322595  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:41.343739  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:41.352232  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:41.355513  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:41.364798  503585 logs.go:123] Gathering logs for container status ...
	I0730 00:10:41.364827  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 00:10:41.482803  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:41.844454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:41.851002  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:41.853645  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:41.983144  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:42.343225  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:42.353549  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:42.355297  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:42.482455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:42.843589  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:42.850533  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:42.853867  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:42.981778  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:43.343809  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:43.350945  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:43.353374  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:43.481825  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:43.844317  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:43.851663  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:43.854069  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:43.910633  503585 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I0730 00:10:43.915808  503585 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I0730 00:10:43.916853  503585 api_server.go:141] control plane version: v1.30.3
	I0730 00:10:43.916878  503585 api_server.go:131] duration metric: took 4.081714371s to wait for apiserver health ...
	I0730 00:10:43.916887  503585 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 00:10:43.916914  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 00:10:43.916965  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 00:10:43.951907  503585 cri.go:89] found id: "cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:43.951937  503585 cri.go:89] found id: ""
	I0730 00:10:43.951947  503585 logs.go:276] 1 containers: [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363]
	I0730 00:10:43.952006  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:43.955910  503585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 00:10:43.955972  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 00:10:43.982592  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:43.996176  503585 cri.go:89] found id: "499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:43.996201  503585 cri.go:89] found id: ""
	I0730 00:10:43.996212  503585 logs.go:276] 1 containers: [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9]
	I0730 00:10:43.996274  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.000468  503585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 00:10:44.000537  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 00:10:44.034889  503585 cri.go:89] found id: "f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:44.034918  503585 cri.go:89] found id: ""
	I0730 00:10:44.034929  503585 logs.go:276] 1 containers: [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330]
	I0730 00:10:44.034985  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.038959  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 00:10:44.039042  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 00:10:44.077320  503585 cri.go:89] found id: "3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:44.077344  503585 cri.go:89] found id: ""
	I0730 00:10:44.077352  503585 logs.go:276] 1 containers: [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568]
	I0730 00:10:44.077405  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.081536  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 00:10:44.081613  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 00:10:44.116042  503585 cri.go:89] found id: "ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:44.116067  503585 cri.go:89] found id: ""
	I0730 00:10:44.116075  503585 logs.go:276] 1 containers: [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d]
	I0730 00:10:44.116131  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.120107  503585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 00:10:44.120183  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 00:10:44.154944  503585 cri.go:89] found id: "60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:44.154973  503585 cri.go:89] found id: ""
	I0730 00:10:44.154985  503585 logs.go:276] 1 containers: [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952]
	I0730 00:10:44.155075  503585 ssh_runner.go:195] Run: which crictl
	I0730 00:10:44.159060  503585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 00:10:44.159139  503585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 00:10:44.192874  503585 cri.go:89] found id: ""
	I0730 00:10:44.192902  503585 logs.go:276] 0 containers: []
	W0730 00:10:44.192911  503585 logs.go:278] No container was found matching "kindnet"
	I0730 00:10:44.192922  503585 logs.go:123] Gathering logs for coredns [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330] ...
	I0730 00:10:44.192949  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330"
	I0730 00:10:44.228061  503585 logs.go:123] Gathering logs for kube-scheduler [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568] ...
	I0730 00:10:44.228091  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568"
	I0730 00:10:44.270787  503585 logs.go:123] Gathering logs for kube-controller-manager [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952] ...
	I0730 00:10:44.270827  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952"
	I0730 00:10:44.335225  503585 logs.go:123] Gathering logs for CRI-O ...
	I0730 00:10:44.335262  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 00:10:44.343201  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:44.351303  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:44.354890  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:44.482251  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:44.845621  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:44.851435  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:44.854473  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:44.982504  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:44.996224  503585 logs.go:123] Gathering logs for etcd [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9] ...
	I0730 00:10:44.996278  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9"
	I0730 00:10:45.037212  503585 logs.go:123] Gathering logs for kube-proxy [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d] ...
	I0730 00:10:45.037246  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d"
	I0730 00:10:45.070880  503585 logs.go:123] Gathering logs for container status ...
	I0730 00:10:45.070910  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 00:10:45.118520  503585 logs.go:123] Gathering logs for kubelet ...
	I0730 00:10:45.118556  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 00:10:45.189686  503585 logs.go:123] Gathering logs for dmesg ...
	I0730 00:10:45.189729  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 00:10:45.203951  503585 logs.go:123] Gathering logs for describe nodes ...
	I0730 00:10:45.203985  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 00:10:45.319722  503585 logs.go:123] Gathering logs for kube-apiserver [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363] ...
	I0730 00:10:45.319761  503585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363"
	I0730 00:10:45.346303  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:45.351120  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:45.353855  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:45.482392  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:45.843658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:45.851237  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:45.854773  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:45.983353  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:46.343138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:46.352343  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:46.354797  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:46.482102  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:46.843383  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:46.850680  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:46.854393  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:46.990454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:47.344017  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:47.350808  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:47.354409  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:47.482231  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:47.843149  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:47.850986  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:47.853278  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:47.885665  503585 system_pods.go:59] 18 kube-system pods found
	I0730 00:10:47.885699  503585 system_pods.go:61] "coredns-7db6d8ff4d-lznwz" [547ad840-f72d-4dd5-b452-c9368370f5f9] Running
	I0730 00:10:47.885709  503585 system_pods.go:61] "csi-hostpath-attacher-0" [75121907-e5d8-4377-a36b-77be23e5b05d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0730 00:10:47.885719  503585 system_pods.go:61] "csi-hostpath-resizer-0" [9b84b86e-e802-4cfe-8a48-95f95a7ef99a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0730 00:10:47.885729  503585 system_pods.go:61] "csi-hostpathplugin-52djf" [6f0e9aeb-dcc9-4b01-8442-8c1f93583cea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0730 00:10:47.885736  503585 system_pods.go:61] "etcd-addons-091578" [c1861038-1f13-43ee-8e13-3d94f488ca4b] Running
	I0730 00:10:47.885742  503585 system_pods.go:61] "kube-apiserver-addons-091578" [14441c40-e373-40a1-8b22-2f7d6acfaf0c] Running
	I0730 00:10:47.885746  503585 system_pods.go:61] "kube-controller-manager-addons-091578" [84d24c29-acf6-42d5-b516-b1d852d1adfd] Running
	I0730 00:10:47.885754  503585 system_pods.go:61] "kube-ingress-dns-minikube" [7057a5f6-2896-4f06-9824-0772c339905f] Running
	I0730 00:10:47.885760  503585 system_pods.go:61] "kube-proxy-4j5tl" [d252b4fe-1396-4ebd-9108-a3a6874b8245] Running
	I0730 00:10:47.885764  503585 system_pods.go:61] "kube-scheduler-addons-091578" [a4346809-fd43-484e-b6a1-165f50b28ad8] Running
	I0730 00:10:47.885770  503585 system_pods.go:61] "metrics-server-c59844bb4-4z28f" [8efac445-c550-499b-9e0a-05b83969bc15] Running
	I0730 00:10:47.885777  503585 system_pods.go:61] "nvidia-device-plugin-daemonset-ln654" [f07b96ab-d52e-45d8-9c29-00c89fc8619e] Running
	I0730 00:10:47.885787  503585 system_pods.go:61] "registry-698f998955-mczh9" [99907a0e-3d47-408f-b8ea-3725dee9f03b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0730 00:10:47.885800  503585 system_pods.go:61] "registry-proxy-nqxzf" [613243a6-ea19-4999-ad5f-ca96c8e11bfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0730 00:10:47.885814  503585 system_pods.go:61] "snapshot-controller-745499f584-jc7wn" [b3945078-d405-4d3b-86fa-941fda4173df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0730 00:10:47.885827  503585 system_pods.go:61] "snapshot-controller-745499f584-q92j4" [fc3f1272-bf9e-40bd-9504-79a1529e0738] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0730 00:10:47.885832  503585 system_pods.go:61] "storage-provisioner" [383d9f3e-a160-4fa0-bf37-8472c0c4607c] Running
	I0730 00:10:47.885840  503585 system_pods.go:61] "tiller-deploy-6677d64bcd-7kxlp" [e02f9185-5b7f-40f5-baf0-64a0c45bc97e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0730 00:10:47.885849  503585 system_pods.go:74] duration metric: took 3.968954532s to wait for pod list to return data ...
	I0730 00:10:47.885862  503585 default_sa.go:34] waiting for default service account to be created ...
	I0730 00:10:47.887724  503585 default_sa.go:45] found service account: "default"
	I0730 00:10:47.887744  503585 default_sa.go:55] duration metric: took 1.875431ms for default service account to be created ...
	I0730 00:10:47.887751  503585 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 00:10:47.895217  503585 system_pods.go:86] 18 kube-system pods found
	I0730 00:10:47.895246  503585 system_pods.go:89] "coredns-7db6d8ff4d-lznwz" [547ad840-f72d-4dd5-b452-c9368370f5f9] Running
	I0730 00:10:47.895255  503585 system_pods.go:89] "csi-hostpath-attacher-0" [75121907-e5d8-4377-a36b-77be23e5b05d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0730 00:10:47.895262  503585 system_pods.go:89] "csi-hostpath-resizer-0" [9b84b86e-e802-4cfe-8a48-95f95a7ef99a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0730 00:10:47.895271  503585 system_pods.go:89] "csi-hostpathplugin-52djf" [6f0e9aeb-dcc9-4b01-8442-8c1f93583cea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0730 00:10:47.895283  503585 system_pods.go:89] "etcd-addons-091578" [c1861038-1f13-43ee-8e13-3d94f488ca4b] Running
	I0730 00:10:47.895288  503585 system_pods.go:89] "kube-apiserver-addons-091578" [14441c40-e373-40a1-8b22-2f7d6acfaf0c] Running
	I0730 00:10:47.895292  503585 system_pods.go:89] "kube-controller-manager-addons-091578" [84d24c29-acf6-42d5-b516-b1d852d1adfd] Running
	I0730 00:10:47.895298  503585 system_pods.go:89] "kube-ingress-dns-minikube" [7057a5f6-2896-4f06-9824-0772c339905f] Running
	I0730 00:10:47.895302  503585 system_pods.go:89] "kube-proxy-4j5tl" [d252b4fe-1396-4ebd-9108-a3a6874b8245] Running
	I0730 00:10:47.895308  503585 system_pods.go:89] "kube-scheduler-addons-091578" [a4346809-fd43-484e-b6a1-165f50b28ad8] Running
	I0730 00:10:47.895312  503585 system_pods.go:89] "metrics-server-c59844bb4-4z28f" [8efac445-c550-499b-9e0a-05b83969bc15] Running
	I0730 00:10:47.895319  503585 system_pods.go:89] "nvidia-device-plugin-daemonset-ln654" [f07b96ab-d52e-45d8-9c29-00c89fc8619e] Running
	I0730 00:10:47.895325  503585 system_pods.go:89] "registry-698f998955-mczh9" [99907a0e-3d47-408f-b8ea-3725dee9f03b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0730 00:10:47.895331  503585 system_pods.go:89] "registry-proxy-nqxzf" [613243a6-ea19-4999-ad5f-ca96c8e11bfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0730 00:10:47.895339  503585 system_pods.go:89] "snapshot-controller-745499f584-jc7wn" [b3945078-d405-4d3b-86fa-941fda4173df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0730 00:10:47.895349  503585 system_pods.go:89] "snapshot-controller-745499f584-q92j4" [fc3f1272-bf9e-40bd-9504-79a1529e0738] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0730 00:10:47.895353  503585 system_pods.go:89] "storage-provisioner" [383d9f3e-a160-4fa0-bf37-8472c0c4607c] Running
	I0730 00:10:47.895360  503585 system_pods.go:89] "tiller-deploy-6677d64bcd-7kxlp" [e02f9185-5b7f-40f5-baf0-64a0c45bc97e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0730 00:10:47.895366  503585 system_pods.go:126] duration metric: took 7.609576ms to wait for k8s-apps to be running ...
	I0730 00:10:47.895376  503585 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 00:10:47.895423  503585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:10:47.910714  503585 system_svc.go:56] duration metric: took 15.32925ms WaitForService to wait for kubelet
	I0730 00:10:47.910743  503585 kubeadm.go:582] duration metric: took 3m48.56992122s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:10:47.910766  503585 node_conditions.go:102] verifying NodePressure condition ...
	I0730 00:10:47.913597  503585 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:10:47.913624  503585 node_conditions.go:123] node cpu capacity is 2
	I0730 00:10:47.913650  503585 node_conditions.go:105] duration metric: took 2.879925ms to run NodePressure ...
	I0730 00:10:47.913662  503585 start.go:241] waiting for startup goroutines ...
	I0730 00:10:47.981679  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:48.343843  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:48.357347  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:48.357970  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:48.481804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:48.844562  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:48.851410  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:48.853651  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:48.982616  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:49.343608  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:49.350817  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:49.353881  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:49.481761  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:49.843361  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:49.852792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:49.854061  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:49.982056  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:50.344833  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:50.353855  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:50.355643  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:50.482581  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:50.844372  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:50.851285  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:50.853817  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:50.981719  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:51.343922  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:51.350720  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:51.353887  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:51.482382  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:51.843566  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:51.852171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:51.854294  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:51.982008  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:52.343802  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:52.352540  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:52.354677  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:52.482741  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:52.843685  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:52.850565  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:52.853989  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:52.982905  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:53.345793  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:53.351600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:53.354452  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:53.485104  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:53.843252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:53.851943  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:53.854166  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:53.981792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:54.343696  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:54.355809  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:54.355952  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:54.482191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:54.843227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:54.851594  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:54.854155  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:54.983411  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:55.343698  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:55.350697  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:55.354089  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:55.482011  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:55.844078  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:55.851389  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:55.854316  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:55.982563  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:56.343553  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:56.354106  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:56.355401  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:56.482840  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:56.859972  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:56.873553  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:56.875485  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:56.982568  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:57.343858  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:57.351850  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:57.363670  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:57.482466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:57.843937  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:57.851921  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:57.854391  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:57.982625  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:58.344381  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:58.357977  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:58.359524  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:58.482310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:58.843351  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:58.851740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:58.854095  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:58.982211  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:59.344329  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:59.354722  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:59.357846  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:59.482621  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:10:59.843589  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:10:59.850910  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:10:59.854152  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:10:59.981562  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:00.343639  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:00.353145  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:00.356235  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:00.482292  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:00.843203  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:00.851501  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:00.853783  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:00.981804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:01.343606  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:01.350953  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:01.355008  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:01.482221  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:01.842882  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:01.850807  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:01.855151  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:01.982062  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:02.343772  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:02.357098  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:02.357107  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:02.482268  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:02.843325  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:02.853096  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:02.854769  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:02.982576  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:03.343620  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:03.353710  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:03.355593  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:03.482293  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:03.843202  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:03.850907  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:03.853553  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:03.982178  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:04.343086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:04.354291  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:04.354997  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:04.481941  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:04.843733  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:04.850843  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:04.854149  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:04.982032  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:05.344101  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:05.351130  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:05.354170  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:05.482346  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:05.844639  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:05.851973  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:05.855078  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:05.982044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:06.343031  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:06.354351  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:06.355296  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:06.482086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:06.843674  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:06.850591  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:06.853447  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:06.982880  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:07.344033  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:07.351195  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:07.353407  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:07.482452  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:07.843724  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:07.850487  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:07.853762  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:07.982646  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:08.343804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:08.353268  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:08.355516  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:08.482113  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:08.843042  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:08.851207  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:08.855070  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:08.981595  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:09.343556  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:09.356915  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:09.360138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:09.482399  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:09.843781  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:09.851670  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:09.854022  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:09.981655  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:10.343921  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:10.352925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:10.355373  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:10.482259  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:10.843254  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:10.851132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:10.853710  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:10.982672  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:11.343755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:11.350734  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:11.354003  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:11.481786  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:11.843572  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:11.850775  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:11.853824  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:11.982454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:12.343351  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:12.355788  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:12.356016  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:12.482261  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:12.843317  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:12.852015  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:12.854340  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:12.982242  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:13.343341  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:13.350485  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:13.353400  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:13.482533  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:13.844059  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:13.851088  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:13.853823  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:13.982252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:14.343296  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:14.353564  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:14.354716  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:14.482762  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:14.844423  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:14.850485  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:14.853662  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:14.982288  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:15.343326  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:15.351085  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:15.353615  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:15.482833  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:15.843936  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:15.850919  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:15.854351  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:15.982419  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:16.345120  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:16.356472  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:16.357450  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:16.482021  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:16.844071  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:16.850886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:16.853547  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:16.982181  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:17.342867  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:17.350760  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:17.353694  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:17.482712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:17.843600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:17.850492  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:17.853690  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:17.982804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:18.344311  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:18.353306  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:18.355614  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:18.482571  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:18.843869  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:18.850722  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:18.855137  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:18.981885  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:19.343808  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:19.351397  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:19.353896  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:19.482996  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:19.843864  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:19.851081  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:19.853285  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:19.982754  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:20.343709  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:20.352622  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:20.355471  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:20.482036  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:20.843555  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:20.850449  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:20.853363  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:20.982165  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:21.343146  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:21.352039  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:21.353849  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:21.481755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:21.844112  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:21.851057  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:21.853763  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:21.982708  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:22.344856  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:22.353866  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:22.355113  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:22.482584  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:22.843932  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:22.851096  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:22.853589  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:22.982276  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:23.343902  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:23.351118  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:23.353765  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:23.482123  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:23.843141  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:23.851136  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:23.853418  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:23.982409  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:24.343378  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:24.354954  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:24.355211  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:24.481933  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:24.844890  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:24.850557  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:24.853487  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:24.982827  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:25.343766  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:25.352143  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:25.354426  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:25.482788  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:25.844130  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:25.851497  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:25.853717  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:25.982920  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:26.344009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:26.354278  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:26.356243  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:26.482147  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:26.843658  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:26.850700  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:26.854226  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:26.982157  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:27.345699  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:27.358901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:27.359017  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:27.483227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:27.843540  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:27.850413  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:27.854157  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:27.982422  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:28.343280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:28.353974  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:28.355163  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:28.483045  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:28.845208  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:28.850711  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:28.854274  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:28.981987  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:29.344349  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:29.351020  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:29.354292  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:29.483259  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:29.843193  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:29.851388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:29.853897  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:29.981536  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:30.343464  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:30.354940  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:30.355005  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:30.482044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:30.844171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:30.851784  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:30.854260  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:30.982524  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:31.343669  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:31.351554  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:31.353721  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:31.482687  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:31.845044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:31.850310  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:31.853852  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:31.982082  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:32.343132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:32.356009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:32.356414  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:32.482505  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:32.843650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:32.850665  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:32.853828  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:32.983700  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:33.343827  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:33.350880  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:33.354240  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:33.481745  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:33.843433  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:33.851422  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:33.853882  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:33.981737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:34.343580  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:34.354477  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:34.356009  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:34.482767  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:34.843792  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:34.850635  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:34.853822  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:34.982690  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:35.343572  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:35.351167  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:35.354026  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:35.482112  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:35.844343  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:35.851249  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:35.853903  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:35.982496  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:36.343099  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:36.350616  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:36.355674  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:36.482490  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:36.844286  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:36.850391  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:36.854154  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:36.982171  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:37.342813  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:37.351025  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:37.353641  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:37.483467  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:37.843615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:37.851730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:37.854361  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:37.982280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:38.343098  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:38.354931  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:38.357227  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:38.485650  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:38.844552  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:38.850808  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:38.853913  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:38.982667  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:39.343628  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:39.350732  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:39.354071  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:39.482707  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:39.843946  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:39.850924  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:39.854226  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:39.981425  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:40.343226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:40.354174  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:40.356123  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:40.484392  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:40.843777  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:40.850726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:40.853881  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:40.981986  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:41.345740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:41.350432  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:41.353490  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:41.482252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:41.843746  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:41.852138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:41.854084  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:41.982574  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:42.343901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:42.355267  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:42.356159  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:42.482688  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:42.843642  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:42.850721  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:42.853620  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:42.983605  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:43.343647  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:43.350491  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:43.353379  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:43.482963  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:43.844172  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:43.850934  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:43.853730  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:43.983067  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:44.344302  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:44.354746  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:44.358643  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:44.482208  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:44.843055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:44.851194  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:44.853638  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:44.982375  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:45.343641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:45.350783  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:45.353729  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:45.482608  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:45.844229  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:45.851340  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:45.853842  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:45.982348  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:46.343814  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:46.353735  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:46.355709  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:46.482653  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:46.844042  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:46.851360  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:46.854047  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:46.981665  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:47.343553  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:47.350905  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:47.354562  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:47.482730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:47.846642  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:47.851617  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:47.854244  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:47.982821  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:48.348755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:48.358993  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:48.362692  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:48.482929  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:48.845986  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:48.855258  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:48.855556  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:48.983374  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:49.345751  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:49.351422  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:49.354407  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:49.483209  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:49.844002  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:49.851581  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:49.854537  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:49.982849  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:50.344447  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:50.351881  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:50.357998  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:50.483462  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:50.844077  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:50.852387  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:50.853875  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:50.981864  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:51.344350  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:51.351990  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:51.354121  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:51.482203  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:51.843589  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:51.850812  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:51.854340  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:51.982119  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:52.344161  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:52.358205  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:52.362823  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:52.482538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:52.843627  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:52.853524  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:52.855356  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:52.981926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:53.353335  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:53.357060  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:53.364900  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:53.481386  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:53.844774  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:53.852117  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:53.854506  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:53.982078  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:54.344388  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:54.355804  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:54.357721  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:54.482392  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:54.843819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:54.851192  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:54.853694  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:54.982749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:55.343794  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:55.350979  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:55.353616  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:55.483659  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:55.844614  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:55.852200  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:55.854994  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:55.981819  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:56.346448  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:56.351372  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:56.354248  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:56.483356  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:56.845373  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:56.851934  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:56.854609  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:56.983840  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:57.343760  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:57.352189  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:57.354722  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:57.482829  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:57.843784  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:57.851939  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:57.854678  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:57.982302  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:58.343331  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:58.356187  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:58.356227  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:58.481962  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:58.845817  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:58.851024  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:58.854593  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:58.982321  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:59.343013  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:59.350886  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:59.353814  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:59.482493  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:11:59.843556  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:11:59.851479  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:11:59.854058  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:11:59.982870  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:00.344123  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:00.353639  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:00.355609  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:00.482010  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:00.843282  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:00.852512  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:00.855192  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:00.982342  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:01.342867  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:01.351741  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:01.354483  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:01.482596  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:01.844131  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:01.851227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:01.853947  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:01.981770  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:02.343808  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:02.352846  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:02.356872  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:02.482612  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:02.844117  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:02.852304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:02.854826  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:02.981532  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:03.343607  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:03.351346  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:03.354503  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:03.481578  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:03.843630  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:03.852537  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:03.855241  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:03.982489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:04.343253  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:04.355957  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:04.356000  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:04.482551  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:04.843723  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:04.851673  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:04.854690  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:04.982741  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:05.343944  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:05.351018  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:05.354425  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:05.481941  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:05.843998  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:05.851090  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:05.853903  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:05.981730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:06.343458  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:06.350577  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:06.357257  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:06.481878  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:06.844591  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:06.851969  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:06.855088  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:06.982755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:07.343476  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:07.350610  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:07.353725  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:07.482984  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:07.844667  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:07.852857  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:07.854651  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:07.982363  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:08.344130  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:08.360926  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:08.361253  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:08.482461  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:08.843641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:08.852622  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:08.856162  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:08.982726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:09.343974  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:09.352726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:09.354514  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:09.483161  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:09.842515  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:09.852615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:09.854532  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:09.981943  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:10.343820  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:10.353935  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:10.359508  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:10.482131  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:10.843191  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:10.851287  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:10.854045  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:10.981749  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:11.343851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:11.351491  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:11.353876  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:11.481536  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:11.843697  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:11.851091  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:11.853952  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:11.981676  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:12.343832  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:12.350662  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:12.354456  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:12.482120  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:12.843123  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:12.851925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:12.853997  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:12.981776  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:13.343554  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:13.350682  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:13.354017  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:13.481252  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:13.843434  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:13.850762  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:13.854081  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:13.981962  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:14.344037  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:14.355093  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:14.358080  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:14.482242  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:14.843737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:14.851260  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:14.854963  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:14.982023  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:15.343016  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:15.359111  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:15.359232  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:15.482466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:15.843445  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:15.850828  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:15.853919  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:15.981793  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:16.343722  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:16.352285  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:16.358548  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:16.482453  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:16.843337  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:16.851710  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:16.854739  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:16.983493  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:17.343638  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:17.350687  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:17.354062  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:17.482055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:17.846055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:17.854345  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:17.854486  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:17.982762  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:18.346507  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:18.351621  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:18.359590  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:18.482333  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:18.844243  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:18.853317  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:18.855132  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:18.982361  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:19.342822  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:19.354162  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:19.357958  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:19.482021  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:19.843121  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:19.851473  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:19.853715  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:19.982431  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:20.343566  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:20.351680  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:20.355057  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:20.481636  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:20.843907  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:20.853211  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:20.855206  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:20.982090  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:21.343550  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:21.351073  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:21.353575  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:21.482636  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:21.845980  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:21.853611  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:21.853820  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:21.982872  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:22.344878  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:22.351278  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:22.358963  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:22.482017  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:22.849403  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:22.862108  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:22.862343  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:22.982454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:23.343543  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:23.350892  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:23.354605  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:23.482755  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:23.844430  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:23.854799  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:23.855692  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:23.983034  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:24.344241  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:24.351616  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:24.357842  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:24.482379  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:24.843926  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:24.851730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:24.854235  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:24.982431  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:25.344095  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:25.351193  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:25.354079  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:25.482296  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:25.843468  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:25.850791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:25.854182  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:25.982331  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:26.343495  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:26.350113  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:26.355295  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:26.482122  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:26.842901  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:26.851624  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:26.854399  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:26.981993  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:27.343991  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:27.350850  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:27.354305  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:27.482000  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:27.844897  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:27.851532  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:27.854405  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:27.982671  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:28.343600  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:28.354783  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:28.360157  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:28.482505  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:28.843416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:28.850855  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:28.853731  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:28.982671  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:29.344150  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:29.353768  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:29.354550  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:29.482086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:29.843348  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:29.850791  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:29.854362  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:29.981960  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:30.344009  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:30.352387  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:30.354079  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:30.482691  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:30.843995  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:30.851302  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:30.853835  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:30.982426  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:31.343489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:31.351226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:31.354722  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:31.482783  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:31.843721  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:31.850712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:31.854288  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:31.982331  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:32.343280  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:32.351227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:32.357673  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:32.482526  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:32.843439  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:32.851751  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:32.854658  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:32.982674  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:33.344087  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:33.351265  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:33.354052  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:33.481455  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:33.844196  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:33.851730  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:33.854418  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:33.982322  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:34.343838  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:34.350515  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:34.354710  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:34.482648  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:34.843943  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:34.851506  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:34.854246  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:34.983128  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:35.344137  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:35.351761  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:35.354535  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:35.482025  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:35.844107  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:35.853426  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:35.855785  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:35.982538  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:36.344259  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:36.351304  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:36.354731  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:36.482489  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:36.843479  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:36.851754  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:36.854028  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:36.981877  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:37.343702  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:37.350794  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:37.353657  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:37.482615  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:37.845236  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:37.851022  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:37.853421  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:37.982324  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:38.343204  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:38.352086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:38.355656  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:38.482605  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:38.844000  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:38.851826  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:38.854246  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:38.982026  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:39.343383  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:39.352138  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:39.355217  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:39.482462  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:39.845818  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:39.851951  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:39.855291  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:39.981907  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:40.344023  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:40.354805  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:40.360060  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:40.481826  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:40.843843  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:40.851159  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:40.853600  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:40.982584  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:41.344933  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:41.352640  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:41.355412  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:41.482272  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:41.847080  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:41.852043  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:41.855074  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:41.982055  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:42.342851  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:42.351158  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:42.354593  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:42.482287  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:42.843416  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:42.850633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:42.854340  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:42.981955  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:43.344437  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:43.351085  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:43.354313  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:43.481177  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:43.844506  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:43.851975  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:43.855697  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:43.982199  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:44.344031  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:44.352697  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:44.357728  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:44.483554  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:44.842897  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:44.850902  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:44.853840  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:44.983769  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:45.344437  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:45.351247  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:45.353894  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:45.482205  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:45.843524  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:45.859218  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:45.859458  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:45.982295  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:46.343160  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:46.351110  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:46.353975  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:46.481920  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:46.844122  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:46.851403  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:46.856519  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:46.982291  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:47.343612  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:47.350466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:47.353692  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:47.482564  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:47.843871  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:47.852363  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:47.858283  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:47.982219  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:48.345002  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:48.351177  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:48.354510  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:48.481922  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:48.846681  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:48.855143  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 00:12:48.857039  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:48.981674  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:49.343267  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:49.351133  503585 kapi.go:107] duration metric: took 5m41.504647458s to wait for kubernetes.io/minikube-addons=registry ...
	I0730 00:12:49.353583  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:49.481443  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:49.843740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:49.854674  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:49.982737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:50.345136  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:50.356018  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:50.483463  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:50.844726  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:50.855104  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:50.981699  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:51.345158  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:51.354955  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:51.482641  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:51.844967  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:51.855044  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:51.981925  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:52.344466  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:52.355266  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:52.482394  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:52.843594  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:52.854851  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:52.982627  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:53.343623  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:53.354542  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:53.481737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:53.844813  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:53.855659  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:53.982897  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:54.343737  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:54.354962  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:54.482550  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:54.846732  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:54.855535  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:54.982424  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:55.343668  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:55.354927  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:55.482597  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:55.845740  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:55.854750  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:55.982889  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:56.343633  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:56.360464  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:56.482535  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:56.844293  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:56.855588  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:56.982711  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:57.344303  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:57.355118  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:57.482339  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:57.843329  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:57.855334  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:57.982086  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:58.344669  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:58.355085  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:58.483805  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:58.843789  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:58.854255  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:59.105568  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:59.345928  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:59.354804  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:59.482324  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:12:59.843226  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:12:59.854905  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:12:59.981323  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:00.343454  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:00.354244  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:00.482374  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:00.843575  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:00.855155  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:00.982399  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:01.342900  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:01.354698  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:01.482495  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:01.845227  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:01.856824  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:01.982470  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:02.343584  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:02.355371  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:02.482037  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:02.843044  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:02.855962  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:02.981527  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:03.344263  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:03.354177  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:03.481183  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:03.846675  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:03.856029  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:03.981670  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:04.343459  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:04.355810  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:04.482132  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:04.843712  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:04.856569  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:04.981866  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:05.344318  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:05.355562  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:05.481995  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:05.844937  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:05.855405  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:05.982141  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:06.733907  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:06.736015  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:06.740803  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:06.843365  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:06.855122  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:06.981711  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:07.344196  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:07.356255  503585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 00:13:07.482582  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:07.843278  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:07.850616  503585 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=ingress-nginx" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0730 00:13:07.850647  503585 kapi.go:107] duration metric: took 6m0.000251922s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0730 00:13:07.850808  503585 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0730 00:13:07.982176  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:08.343270  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 00:13:08.778514  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:08.838506  503585 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0730 00:13:08.838538  503585 kapi.go:107] duration metric: took 6m0.00051547s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0730 00:13:08.838624  503585 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0730 00:13:08.990160  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:09.482137  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:09.981860  503585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 00:13:10.479071  503585 kapi.go:107] duration metric: took 6m0.000775326s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0730 00:13:10.479209  503585 out.go:239] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0730 00:13:10.481105  503585 out.go:177] * Enabled addons: nvidia-device-plugin, metrics-server, storage-provisioner, helm-tiller, ingress-dns, inspektor-gadget, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry
	I0730 00:13:10.482442  503585 addons.go:510] duration metric: took 6m11.141560313s for enable addons: enabled=[nvidia-device-plugin metrics-server storage-provisioner helm-tiller ingress-dns inspektor-gadget cloud-spanner yakd default-storageclass volumesnapshots registry]
	I0730 00:13:10.482488  503585 start.go:246] waiting for cluster config update ...
	I0730 00:13:10.482517  503585 start.go:255] writing updated cluster config ...
	I0730 00:13:10.482810  503585 ssh_runner.go:195] Run: rm -f paused
	I0730 00:13:10.556870  503585 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0730 00:13:10.558756  503585 out.go:177] * Done! kubectl is now configured to use "addons-091578" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.077472678Z" level=debug msg="Container or sandbox exited: f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62" file="server/server.go:810"
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.077493887Z" level=debug msg="sandbox infra exited and found: f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62" file="server/server.go:825"
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.076269272Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62.RILQR2\"" file="server/server.go:805"
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.090219526Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Verbose:false,}" file="otel-collector/interceptors.go:62" id=6eb29641-c3b9-4c1e-a995-a8d256fc83cc name=/runtime.v1.RuntimeService/PodSandboxStatus
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.090456352Z" level=debug msg="Unmounted container f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62" file="storage/runtime.go:495" id=92025ef4-4461-45a3-b832-8ff69286afc0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.092736483Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-4z28f,Uid:8efac445-c550-499b-9e0a-05b83969bc15,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722298025235548416,Network:&PodSandboxNetworkStatus{Ip:10.244.0.9,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:07:04.609443646Z,kubernetes.io/config.sou
rce: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=6eb29641-c3b9-4c1e-a995-a8d256fc83cc name=/runtime.v1.RuntimeService/PodSandboxStatus
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.093536121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},},}" file="otel-collector/interceptors.go:62" id=e7350470-dc1a-4c20-92b3-6be15a2a4068 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.093652014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7350470-dc1a-4c20-92b3-6be15a2a4068 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.093822209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a2d08297396a5a961d18120712ca2d71c72062e1f1ec5618e7385f378434df4,PodSandboxId:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1722298165858870583,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},Annotations:map[string]string{io.kubernetes.container.hash: 32a1acc4,io.kubern
etes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7350470-dc1a-4c20-92b3-6be15a2a4068 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.094270172Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6a2d08297396a5a961d18120712ca2d71c72062e1f1ec5618e7385f378434df4,Verbose:false,}" file="otel-collector/interceptors.go:62" id=196034e3-caf1-4985-a3d2-c56daeb1fd87 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.094447985Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6a2d08297396a5a961d18120712ca2d71c72062e1f1ec5618e7385f378434df4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1722298165901911358,StartedAt:1722298165925229433,FinishedAt:1722298812905886146,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},Annotations:map[string]string{io.kubernetes.container.hash: 32a1acc4
,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/var/lib/kubelet/pods/8efac445-c550-499b-9e0a-05b83969bc15/volumes/kubernetes.io~empty-dir/tmp-dir,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8efac445-c550-499b-9e0a-05b83969bc15/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8efac445-c550-499b-9e0a-05b83969bc15/containers/metrics-server/6cce570d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_P
RIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/8efac445-c550-499b-9e0a-05b83969bc15/volumes/kubernetes.io~projected/kube-api-access-v4cwx,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-4z28f_8efac445-c550-499b-9e0a-05b83969bc15/metrics-server/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:948,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=196034e3-caf1-4985-a3d2-c56daeb1fd87 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.100598121Z" level=debug msg="Found exit code for f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62: 0" file="oci/runtime_oci.go:1022"
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.100772000Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62 Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.name:POD io.kubernetes.cri-o.Annotations:{\"kubernetes.io/config.seen\":\"2024-07-30T00:07:04.609443646Z\",\"kubernetes.io/config.source\":\"api\"} io.kubernetes.cri-o.CNIResult:{\"cniVersion\":\"1.0.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"fa:b3:0e:50:ff:9c\"},{\"name\":\"veth8bf05768\",\"mac\":\"be:13:f9:01:c6:da\"},{\"name\":\"eth0\",\"mac\":\"e6:35:49:2f:1b:06\",\"sandbox\":\"/var/run/netns/d107d5a6-e357-431d-a821-f68c6dc9fbce\"}],\"ips\":[{\"interface\":2,\"address\":\"10.244.0.9/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244
.0.1\"}],\"dns\":{}} io.kubernetes.cri-o.CgroupParent:/kubepods/burstable/pod8efac445-c550-499b-9e0a-05b83969bc15 io.kubernetes.cri-o.ContainerID:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62 io.kubernetes.cri-o.ContainerName:k8s_POD_metrics-server-c59844bb4-4z28f_kube-system_8efac445-c550-499b-9e0a-05b83969bc15_0 io.kubernetes.cri-o.ContainerType:sandbox io.kubernetes.cri-o.Created:2024-07-30T00:07:05.235548416Z io.kubernetes.cri-o.HostName:metrics-server-c59844bb4-4z28f io.kubernetes.cri-o.HostNetwork:false io.kubernetes.cri-o.HostnamePath:/var/run/containers/storage/overlay-containers/f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62/userdata/hostname io.kubernetes.cri-o.Image:registry.k8s.io/pause:3.9 io.kubernetes.cri-o.ImageName:registry.k8s.io/pause:3.9 io.kubernetes.cri-o.KubeName:metrics-server-c59844bb4-4z28f io.kubernetes.cri-o.Labels:{\"pod-template-hash\":\"c59844bb4\",\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"metrics-server\",\"io.kubernetes.pod.uid
\":\"8efac445-c550-499b-9e0a-05b83969bc15\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"metrics-server-c59844bb4-4z28f\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-4z28f_8efac445-c550-499b-9e0a-05b83969bc15/f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server-c59844bb4-4z28f\",\"uid\":\"8efac445-c550-499b-9e0a-05b83969bc15\",\"namespace\":\"kube-system\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/7254fe27731c55ac256d01fd6aa14849098c1da983a14a14eea6bfb7a6fdab17/merged io.kubernetes.cri-o.Name:k8s_metrics-server-c59844bb4-4z28f_kube-system_8efac445-c550-499b-9e0a-05b83969bc15_0 io.kubernetes.cri-o.Namespace:kube-system io.kubernetes.cri-o.NamespaceOptions:{\"pid\":1} io.kubernetes.cri-o.PodLinuxOverhead:{} io.kubernetes.cri-o.PodLinuxResources:{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}} io.kubernetes.cri-o.P
ortMappings:[] io.kubernetes.cri-o.PrivilegedRuntime:false io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62/userdata/resolv.conf io.kubernetes.cri-o.RuntimeHandler: io.kubernetes.cri-o.SandboxID:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62 io.kubernetes.cri-o.SandboxName:k8s_metrics-server-c59844bb4-4z28f_kube-system_8efac445-c550-499b-9e0a-05b83969bc15_0 io.kubernetes.cri-o.SeccompProfilePath:RuntimeDefault io.kubernetes.cri-o.ShmPath:/var/run/containers/storage/overlay-containers/f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62/userdata/shm io.kubernetes.pod.name:metrics-server-c59844bb4-4z28f io.kubernetes.pod.namespace:kube-system io.kubernetes.pod.uid:8efac445-c550-499b-9e0a-05b83969bc15 k8s-app:metrics-server kubernetes.io/config.seen:2024-07-30T00:07:04.609443646Z kubernetes.io/config.source:api pod-template-hash:c59844bb4]} Created:2024-07-30 00:07:06.695763182 +0000 UTC St
arted:2024-07-30 00:07:06.893778895 +0000 UTC m=+36.222209090 Finished:2024-07-30 00:20:13.067896245 +0000 UTC ExitCode:0xc002944be0 OOMKilled:false SeccompKilled:false Error: InitPid:2942 InitStartTime:5826 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946" id=92025ef4-4461-45a3-b832-8ff69286afc0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.105415287Z" level=info msg="Stopped pod sandbox: f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62" file="server/sandbox_stop_linux.go:91" id=92025ef4-4461-45a3-b832-8ff69286afc0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.105619285Z" level=debug msg="Response: &StopPodSandboxResponse{}" file="otel-collector/interceptors.go:74" id=92025ef4-4461-45a3-b832-8ff69286afc0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.112036517Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},},}" file="otel-collector/interceptors.go:62" id=462dce4e-06b8-47b6-a9be-862551eb892a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.112190599Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-4z28f,Uid:8efac445-c550-499b-9e0a-05b83969bc15,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722298025235548416,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:07:04.609443646Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=462dce4e-06b8-47b6-a9be-862551eb892a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.113545308Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Verbose:false,}" file="otel-collector/interceptors.go:62" id=62382b68-a768-4ab5-ab5f-ed9b28e9689f name=/runtime.v1.RuntimeService/PodSandboxStatus
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.113666731Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-4z28f,Uid:8efac445-c550-499b-9e0a-05b83969bc15,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722298025235548416,Network:&PodSandboxNetworkStatus{Ip:10.244.0.9,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:07:04.609443646Z,kubernetes.io/config.
source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=62382b68-a768-4ab5-ab5f-ed9b28e9689f name=/runtime.v1.RuntimeService/PodSandboxStatus
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.113846991Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62\"" file="server/server.go:805"
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.114580837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},},}" file="otel-collector/interceptors.go:62" id=d60543d5-2bae-45ca-92e4-acf9ddb488ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.114652990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d60543d5-2bae-45ca-92e4-acf9ddb488ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.114758307Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a2d08297396a5a961d18120712ca2d71c72062e1f1ec5618e7385f378434df4,PodSandboxId:f5db33d35758e592776ca9301bccabe82665ac4005c7da4bf6dc14093dfd7a62,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1722298165858870583,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},Annotations:map[string]string{io.kubernetes.container.hash: 32a1acc4,io.kubern
etes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d60543d5-2bae-45ca-92e4-acf9ddb488ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.116421240Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6a2d08297396a5a961d18120712ca2d71c72062e1f1ec5618e7385f378434df4,Verbose:false,}" file="otel-collector/interceptors.go:62" id=209d335c-7a30-423b-9716-87899f538c09 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 30 00:20:13 addons-091578 crio[684]: time="2024-07-30 00:20:13.116592664Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6a2d08297396a5a961d18120712ca2d71c72062e1f1ec5618e7385f378434df4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1722298165901911358,StartedAt:1722298165925229433,FinishedAt:1722298812905886146,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-4z28f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efac445-c550-499b-9e0a-05b83969bc15,},Annotations:map[string]string{io.kubernetes.container.hash: 32a1acc4
,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/var/lib/kubelet/pods/8efac445-c550-499b-9e0a-05b83969bc15/volumes/kubernetes.io~empty-dir/tmp-dir,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8efac445-c550-499b-9e0a-05b83969bc15/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8efac445-c550-499b-9e0a-05b83969bc15/containers/metrics-server/6cce570d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_P
RIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/8efac445-c550-499b-9e0a-05b83969bc15/volumes/kubernetes.io~projected/kube-api-access-v4cwx,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-4z28f_8efac445-c550-499b-9e0a-05b83969bc15/metrics-server/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:948,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=209d335c-7a30-423b-9716-87899f538c09 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b96d7e33fdc40       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                        3 minutes ago       Running             hello-world-app                          0                   cd751fc485d99       hello-world-app-6778b5fc9f-jww7v
	5acfe7b0899fc       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                                              5 minutes ago       Running             nginx                                    0                   0e4a566afd9f1       nginx
	96b1aa7bbf2f4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   20f4bfe4e0b14       busybox
	0846abe114994       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	3954537c5ae22       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 6 minutes ago       Running             gcp-auth                                 0                   f698839a71d77       gcp-auth-5db96cd9b4-5cxwj
	46fd5acdb4fdc       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	401664d5c9e46       registry.k8s.io/ingress-nginx/controller@sha256:9818b69f6e49fcc5a284c875f1e58dac1a73486b0dff869a0609145871d752a9                             6 minutes ago       Running             controller                               0                   b54973bbbaaa2       ingress-nginx-controller-6d9bd977d4-mf6vz
	c059d666bd522       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	deebfb1ded078       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	bf914abe5295f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                6 minutes ago       Running             node-driver-registrar                    0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	05fb6240570b3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   2f6f4b01010d7       csi-hostpathplugin-52djf
	5ff2f9293ebcf       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   ccef286fd0154       csi-hostpath-attacher-0
	5257117f839ff       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   85f3b4e2819fd       csi-hostpath-resizer-0
	77fe42014264d       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                                             7 minutes ago       Exited              patch                                    1                   145ef63475360       ingress-nginx-admission-patch-dzc79
	879fb1de98852       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2                   7 minutes ago       Exited              create                                   0                   6c022fa2981ce       ingress-nginx-admission-create-47xkh
	9d5a9b5472d72       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   8e0b8597cda2c       local-path-provisioner-8d985888d-rqmh5
	6a2d08297396a       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872                        10 minutes ago      Exited              metrics-server                           0                   f5db33d35758e       metrics-server-c59844bb4-4z28f
	d2202e7d3177c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             13 minutes ago      Running             storage-provisioner                      0                   3dd991c056b05       storage-provisioner
	f0506da1a2ae3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                                             13 minutes ago      Running             coredns                                  0                   196728114e4d2       coredns-7db6d8ff4d-lznwz
	ca15b02295bfe       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                                             13 minutes ago      Running             kube-proxy                               0                   858fac3db89c5       kube-proxy-4j5tl
	499733049fe68       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                                             13 minutes ago      Running             etcd                                     0                   9d2fd4ffdd8e1       etcd-addons-091578
	3ee890a84b948       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                                             13 minutes ago      Running             kube-scheduler                           0                   f64ee108124b6       kube-scheduler-addons-091578
	60041ecdf7b4c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                                             13 minutes ago      Running             kube-controller-manager                  0                   c75cb106ac5f4       kube-controller-manager-addons-091578
	cdb96aea78f76       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                                             13 minutes ago      Running             kube-apiserver                           0                   48b1f8b800ebc       kube-apiserver-addons-091578
	
	
	==> coredns [f0506da1a2ae338f032c61fd719193f35e430184a8a34c22e0b3e3667c498330] <==
	[INFO] 10.244.0.21:58420 - 10289 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00009386s
	[INFO] 10.244.0.21:58420 - 27298 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000086705s
	[INFO] 10.244.0.21:58420 - 2122 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000088157s
	[INFO] 10.244.0.21:58420 - 24227 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000097459s
	[INFO] 10.244.0.21:45623 - 691 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069867s
	[INFO] 10.244.0.21:45623 - 27035 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058461s
	[INFO] 10.244.0.21:45623 - 13984 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050919s
	[INFO] 10.244.0.21:45623 - 2331 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063824s
	[INFO] 10.244.0.21:45623 - 37366 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076882s
	[INFO] 10.244.0.21:45623 - 62021 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042342s
	[INFO] 10.244.0.21:45623 - 32625 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000085425s
	[INFO] 10.244.0.21:39764 - 28154 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000456422s
	[INFO] 10.244.0.21:39764 - 8773 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000131267s
	[INFO] 10.244.0.21:39764 - 42298 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000113298s
	[INFO] 10.244.0.21:39764 - 56727 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000095197s
	[INFO] 10.244.0.21:39764 - 62841 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000132499s
	[INFO] 10.244.0.21:39764 - 64581 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082802s
	[INFO] 10.244.0.21:39286 - 45943 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062617s
	[INFO] 10.244.0.21:39764 - 54218 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000181427s
	[INFO] 10.244.0.21:39286 - 10945 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101173s
	[INFO] 10.244.0.21:39286 - 25677 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000095222s
	[INFO] 10.244.0.21:39286 - 42788 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028435s
	[INFO] 10.244.0.21:39286 - 16136 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029987s
	[INFO] 10.244.0.21:39286 - 28683 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026933s
	[INFO] 10.244.0.21:39286 - 11406 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040396s
	
	
	==> describe nodes <==
	Name:               addons-091578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-091578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=addons-091578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T00_06_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-091578
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-091578"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:06:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-091578
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:20:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:17:17 +0000   Tue, 30 Jul 2024 00:06:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:17:17 +0000   Tue, 30 Jul 2024 00:06:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:17:17 +0000   Tue, 30 Jul 2024 00:06:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:17:17 +0000   Tue, 30 Jul 2024 00:06:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-091578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2175b4f65f5b42f89841ba61d88b3014
	  System UUID:                2175b4f6-5f5b-42f8-9841-ba61d88b3014
	  Boot ID:                    ff39aba3-5037-47b0-bfbc-125a8399a9e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  default                     hello-world-app-6778b5fc9f-jww7v             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  gcp-auth                    gcp-auth-5db96cd9b4-5cxwj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-mf6vz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-lznwz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-52djf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-091578                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-091578                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-091578        200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4j5tl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-091578                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-8d985888d-rqmh5       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-091578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-091578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-091578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-091578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-091578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-091578 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-091578 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-091578 event: Registered Node addons-091578 in Controller
	
	
	==> dmesg <==
	[ +15.293820] systemd-fstab-generator[1495]: Ignoring "noauto" option for root device
	[  +0.149691] kauditd_printk_skb: 21 callbacks suppressed
	[Jul30 00:07] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.313374] kauditd_printk_skb: 158 callbacks suppressed
	[Jul30 00:09] kauditd_printk_skb: 43 callbacks suppressed
	[Jul30 00:10] kauditd_printk_skb: 4 callbacks suppressed
	[Jul30 00:12] kauditd_printk_skb: 2 callbacks suppressed
	[Jul30 00:13] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.913028] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.005842] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.505645] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.711362] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.063260] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.908418] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.252850] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.986225] kauditd_printk_skb: 17 callbacks suppressed
	[Jul30 00:14] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.009539] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.004222] kauditd_printk_skb: 59 callbacks suppressed
	[  +9.652505] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.135475] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.809767] kauditd_printk_skb: 34 callbacks suppressed
	[ +12.956189] kauditd_printk_skb: 7 callbacks suppressed
	[Jul30 00:16] kauditd_printk_skb: 14 callbacks suppressed
	[Jul30 00:20] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [499733049fe68f09d38efbceba703de44dbf48ee44b25f63dc749f2f0aa5d8f9] <==
	{"level":"warn","ts":"2024-07-30T00:13:29.845108Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T00:13:29.52951Z","time spent":"315.535664ms","remote":"127.0.0.1:53868","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-5gszhrclg236pq3nh3xxg2ls24\" mod_revision:1389 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-5gszhrclg236pq3nh3xxg2ls24\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-5gszhrclg236pq3nh3xxg2ls24\" > >"}
	{"level":"info","ts":"2024-07-30T00:13:29.845152Z","caller":"traceutil/trace.go:171","msg":"trace[1970816166] linearizableReadLoop","detail":"{readStateIndex:1549; appliedIndex:1549; }","duration":"190.890476ms","start":"2024-07-30T00:13:29.654247Z","end":"2024-07-30T00:13:29.845138Z","steps":["trace[1970816166] 'read index received'  (duration: 190.881847ms)","trace[1970816166] 'applied index is now lower than readState.Index'  (duration: 7.174µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T00:13:29.846005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.741718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-30T00:13:29.84608Z","caller":"traceutil/trace.go:171","msg":"trace[1427918091] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1450; }","duration":"191.850091ms","start":"2024-07-30T00:13:29.654222Z","end":"2024-07-30T00:13:29.846072Z","steps":["trace[1427918091] 'agreement among raft nodes before linearized reading'  (duration: 190.965308ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:13:30.039968Z","caller":"traceutil/trace.go:171","msg":"trace[977606159] transaction","detail":"{read_only:false; response_revision:1452; number_of_response:1; }","duration":"191.018514ms","start":"2024-07-30T00:13:29.848931Z","end":"2024-07-30T00:13:30.039949Z","steps":["trace[977606159] 'process raft request'  (duration: 190.301985ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:13:30.040183Z","caller":"traceutil/trace.go:171","msg":"trace[1145085867] transaction","detail":"{read_only:false; response_revision:1451; number_of_response:1; }","duration":"350.836892ms","start":"2024-07-30T00:13:29.689307Z","end":"2024-07-30T00:13:30.040144Z","steps":["trace[1145085867] 'process raft request'  (duration: 348.385941ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:13:30.040265Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T00:13:29.689291Z","time spent":"350.933459ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-6d9bd977d4-mf6vz.17e6d5474a973613\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-6d9bd977d4-mf6vz.17e6d5474a973613\" value_size:675 lease:6697856504141342883 >> failure:<>"}
	{"level":"info","ts":"2024-07-30T00:13:42.125724Z","caller":"traceutil/trace.go:171","msg":"trace[946108652] transaction","detail":"{read_only:false; response_revision:1513; number_of_response:1; }","duration":"193.623085ms","start":"2024-07-30T00:13:41.932074Z","end":"2024-07-30T00:13:42.125697Z","steps":["trace[946108652] 'process raft request'  (duration: 193.463048ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:14:19.012088Z","caller":"traceutil/trace.go:171","msg":"trace[894129127] linearizableReadLoop","detail":"{readStateIndex:1881; appliedIndex:1880; }","duration":"406.90416ms","start":"2024-07-30T00:14:18.605151Z","end":"2024-07-30T00:14:19.012055Z","steps":["trace[894129127] 'read index received'  (duration: 400.787649ms)","trace[894129127] 'applied index is now lower than readState.Index'  (duration: 6.115497ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T00:14:19.012521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.926006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5997"}
	{"level":"info","ts":"2024-07-30T00:14:19.012592Z","caller":"traceutil/trace.go:171","msg":"trace[1862980982] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1765; }","duration":"158.02703ms","start":"2024-07-30T00:14:18.854554Z","end":"2024-07-30T00:14:19.012581Z","steps":["trace[1862980982] 'agreement among raft nodes before linearized reading'  (duration: 157.879296ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:14:19.01251Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"407.149224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-07-30T00:14:19.012738Z","caller":"traceutil/trace.go:171","msg":"trace[939603819] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1765; }","duration":"407.609043ms","start":"2024-07-30T00:14:18.605118Z","end":"2024-07-30T00:14:19.012727Z","steps":["trace[939603819] 'agreement among raft nodes before linearized reading'  (duration: 407.032932ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:14:19.012786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T00:14:18.605105Z","time spent":"407.663983ms","remote":"127.0.0.1:53868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":522,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2024-07-30T00:14:24.113237Z","caller":"traceutil/trace.go:171","msg":"trace[221579905] linearizableReadLoop","detail":"{readStateIndex:1904; appliedIndex:1903; }","duration":"256.806987ms","start":"2024-07-30T00:14:23.856417Z","end":"2024-07-30T00:14:24.113224Z","steps":["trace[221579905] 'read index received'  (duration: 256.680755ms)","trace[221579905] 'applied index is now lower than readState.Index'  (duration: 125.816µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-30T00:14:24.113498Z","caller":"traceutil/trace.go:171","msg":"trace[250618291] transaction","detail":"{read_only:false; response_revision:1787; number_of_response:1; }","duration":"272.788921ms","start":"2024-07-30T00:14:23.840699Z","end":"2024-07-30T00:14:24.113488Z","steps":["trace[250618291] 'process raft request'  (duration: 272.438756ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:14:24.113721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.258173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5997"}
	{"level":"info","ts":"2024-07-30T00:14:24.113744Z","caller":"traceutil/trace.go:171","msg":"trace[1898224654] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1787; }","duration":"257.346306ms","start":"2024-07-30T00:14:23.856391Z","end":"2024-07-30T00:14:24.113737Z","steps":["trace[1898224654] 'agreement among raft nodes before linearized reading'  (duration: 257.214377ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:14:36.704624Z","caller":"traceutil/trace.go:171","msg":"trace[1690797861] transaction","detail":"{read_only:false; response_revision:1898; number_of_response:1; }","duration":"227.590844ms","start":"2024-07-30T00:14:36.477018Z","end":"2024-07-30T00:14:36.704609Z","steps":["trace[1690797861] 'process raft request'  (duration: 227.192188ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:14:36.705065Z","caller":"traceutil/trace.go:171","msg":"trace[1770232593] linearizableReadLoop","detail":"{readStateIndex:2020; appliedIndex:2019; }","duration":"175.344969ms","start":"2024-07-30T00:14:36.528955Z","end":"2024-07-30T00:14:36.7043Z","steps":["trace[1770232593] 'read index received'  (duration: 175.184409ms)","trace[1770232593] 'applied index is now lower than readState.Index'  (duration: 160.068µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T00:14:36.705521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.563453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-07-30T00:14:36.709044Z","caller":"traceutil/trace.go:171","msg":"trace[237935070] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1898; }","duration":"180.116607ms","start":"2024-07-30T00:14:36.528916Z","end":"2024-07-30T00:14:36.709033Z","steps":["trace[237935070] 'agreement among raft nodes before linearized reading'  (duration: 176.507125ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:16:40.322924Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1175}
	{"level":"info","ts":"2024-07-30T00:16:40.395557Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1175,"took":"71.897778ms","hash":3698398867,"current-db-size-bytes":8245248,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":5062656,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-07-30T00:16:40.395746Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3698398867,"revision":1175,"compact-revision":-1}
	
	
	==> kernel <==
	 00:20:13 up 14 min,  0 users,  load average: 0.40, 0.44, 0.34
	Linux addons-091578 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cdb96aea78f76e05a3efb5795ce94c82bc3c82ed6f08f64de828bc449f926363] <==
	W0730 00:13:10.612956       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.108.163.99:443: connect: connection refused
	E0730 00:13:10.613024       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.108.163.99:443: connect: connection refused
	E0730 00:13:54.036847       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:41834: use of closed network connection
	E0730 00:13:54.228609       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:41870: use of closed network connection
	I0730 00:14:13.025810       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.52.11"}
	E0730 00:14:28.235032       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.214:8443->10.244.0.30:38110: read: connection reset by peer
	I0730 00:14:31.923092       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0730 00:14:32.124704       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0730 00:14:32.348719       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.128.150"}
	I0730 00:14:36.760074       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0730 00:14:37.826623       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0730 00:14:51.436527       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.436575       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 00:14:51.465266       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.465451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 00:14:51.486108       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.486170       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 00:14:51.492568       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.492612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 00:14:51.524091       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 00:14:51.528640       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0730 00:14:52.493461       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0730 00:14:52.525076       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0730 00:14:52.533979       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0730 00:16:52.910849       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.59.30"}
	
	
	==> kube-controller-manager [60041ecdf7b4c221d042f0e95879444d1e09e348795f9dafa22300d85bab0952] <==
	W0730 00:18:16.348816       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:18:16.348947       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:18:24.422659       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:18:24.422857       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:18:31.112978       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:18:31.113030       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:18:44.238118       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:18:44.238230       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:19:01.663271       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:19:01.663344       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:19:02.560253       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:19:02.560399       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:19:20.071523       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:19:20.071636       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:19:26.748127       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:19:26.748205       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:19:41.257514       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:19:41.257708       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:19:43.268983       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:19:43.269035       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:20:06.854420       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:20:06.854499       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 00:20:11.108496       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 00:20:11.108646       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0730 00:20:11.767648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="11.82µs"
	
	
	==> kube-proxy [ca15b02295bfe75eb4bfc15856210ed71cab5bc2547baf6c3939f2e89a67896d] <==
	I0730 00:07:00.639704       1 server_linux.go:69] "Using iptables proxy"
	I0730 00:07:00.665725       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	I0730 00:07:00.783987       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 00:07:00.784032       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 00:07:00.784048       1 server_linux.go:165] "Using iptables Proxier"
	I0730 00:07:00.787713       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 00:07:00.787904       1 server.go:872] "Version info" version="v1.30.3"
	I0730 00:07:00.787915       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:07:00.792350       1 config.go:192] "Starting service config controller"
	I0730 00:07:00.792363       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 00:07:00.792380       1 config.go:101] "Starting endpoint slice config controller"
	I0730 00:07:00.792383       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 00:07:00.792690       1 config.go:319] "Starting node config controller"
	I0730 00:07:00.792702       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 00:07:00.893248       1 shared_informer.go:320] Caches are synced for node config
	I0730 00:07:00.893275       1 shared_informer.go:320] Caches are synced for service config
	I0730 00:07:00.893292       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3ee890a84b948a033c908466b218a55b45b71d30c578b28f0dada264d23dc568] <==
	W0730 00:06:41.824164       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0730 00:06:41.824198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0730 00:06:41.824180       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 00:06:41.824284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 00:06:41.824252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 00:06:41.824367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 00:06:42.701691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 00:06:42.701736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 00:06:42.707828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 00:06:42.707885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0730 00:06:42.901708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 00:06:42.901751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 00:06:42.950945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0730 00:06:42.951172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0730 00:06:42.959704       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 00:06:42.959750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 00:06:42.976566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 00:06:42.976679       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 00:06:43.003719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 00:06:43.003764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0730 00:06:43.049616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 00:06:43.049656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 00:06:43.109249       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 00:06:43.109289       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0730 00:06:45.213391       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 00:16:54 addons-091578 kubelet[1266]: I0730 00:16:54.024465    1266 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da"} err="failed to get container status \"57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da\": rpc error: code = NotFound desc = could not find container \"57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da\": container with ID starting with 57b34cd588d156821cdf3aa353865a20eb650e9df2724fff2f4eb895954938da not found: ID does not exist"
	Jul 30 00:16:54 addons-091578 kubelet[1266]: I0730 00:16:54.059351    1266 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jd56g\" (UniqueName: \"kubernetes.io/projected/7057a5f6-2896-4f06-9824-0772c339905f-kube-api-access-jd56g\") on node \"addons-091578\" DevicePath \"\""
	Jul 30 00:16:54 addons-091578 kubelet[1266]: I0730 00:16:54.339264    1266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7057a5f6-2896-4f06-9824-0772c339905f" path="/var/lib/kubelet/pods/7057a5f6-2896-4f06-9824-0772c339905f/volumes"
	Jul 30 00:17:44 addons-091578 kubelet[1266]: E0730 00:17:44.346486    1266 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:17:44 addons-091578 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:17:44 addons-091578 kubelet[1266]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:17:44 addons-091578 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:17:44 addons-091578 kubelet[1266]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:18:44 addons-091578 kubelet[1266]: E0730 00:18:44.340963    1266 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:18:44 addons-091578 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:18:44 addons-091578 kubelet[1266]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:18:44 addons-091578 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:18:44 addons-091578 kubelet[1266]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:19:44 addons-091578 kubelet[1266]: E0730 00:19:44.340985    1266 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:19:44 addons-091578 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:19:44 addons-091578 kubelet[1266]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:19:44 addons-091578 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:19:44 addons-091578 kubelet[1266]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:20:11 addons-091578 kubelet[1266]: I0730 00:20:11.797470    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-jww7v" podStartSLOduration=197.548650816 podStartE2EDuration="3m19.797440602s" podCreationTimestamp="2024-07-30 00:16:52 +0000 UTC" firstStartedPulling="2024-07-30 00:16:53.334220627 +0000 UTC m=+609.099693820" lastFinishedPulling="2024-07-30 00:16:55.583010411 +0000 UTC m=+611.348483606" observedRunningTime="2024-07-30 00:16:56.029931159 +0000 UTC m=+611.795404372" watchObservedRunningTime="2024-07-30 00:20:11.797440602 +0000 UTC m=+807.562913815"
	Jul 30 00:20:13 addons-091578 kubelet[1266]: I0730 00:20:13.141540    1266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8efac445-c550-499b-9e0a-05b83969bc15-tmp-dir\") pod \"8efac445-c550-499b-9e0a-05b83969bc15\" (UID: \"8efac445-c550-499b-9e0a-05b83969bc15\") "
	Jul 30 00:20:13 addons-091578 kubelet[1266]: I0730 00:20:13.141611    1266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4cwx\" (UniqueName: \"kubernetes.io/projected/8efac445-c550-499b-9e0a-05b83969bc15-kube-api-access-v4cwx\") pod \"8efac445-c550-499b-9e0a-05b83969bc15\" (UID: \"8efac445-c550-499b-9e0a-05b83969bc15\") "
	Jul 30 00:20:13 addons-091578 kubelet[1266]: I0730 00:20:13.145644    1266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8efac445-c550-499b-9e0a-05b83969bc15-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "8efac445-c550-499b-9e0a-05b83969bc15" (UID: "8efac445-c550-499b-9e0a-05b83969bc15"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 30 00:20:13 addons-091578 kubelet[1266]: I0730 00:20:13.153621    1266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8efac445-c550-499b-9e0a-05b83969bc15-kube-api-access-v4cwx" (OuterVolumeSpecName: "kube-api-access-v4cwx") pod "8efac445-c550-499b-9e0a-05b83969bc15" (UID: "8efac445-c550-499b-9e0a-05b83969bc15"). InnerVolumeSpecName "kube-api-access-v4cwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 30 00:20:13 addons-091578 kubelet[1266]: I0730 00:20:13.242551    1266 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v4cwx\" (UniqueName: \"kubernetes.io/projected/8efac445-c550-499b-9e0a-05b83969bc15-kube-api-access-v4cwx\") on node \"addons-091578\" DevicePath \"\""
	Jul 30 00:20:13 addons-091578 kubelet[1266]: I0730 00:20:13.242594    1266 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8efac445-c550-499b-9e0a-05b83969bc15-tmp-dir\") on node \"addons-091578\" DevicePath \"\""
	
	
	==> storage-provisioner [d2202e7d3177cd2f406f08d30a641c7902683d37774fd6d7b1c0dd6c6894d0d5] <==
	I0730 00:07:05.781118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0730 00:07:06.276859       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0730 00:07:06.276929       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0730 00:07:06.320535       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0730 00:07:06.320699       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-091578_ebc3b303-5950-46c1-89f3-8f9726695c90!
	I0730 00:07:06.320741       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5173fa91-320a-41ce-b9af-0e8c9dc5a9ac", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-091578_ebc3b303-5950-46c1-89f3-8f9726695c90 became leader
	I0730 00:07:06.637388       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-091578_ebc3b303-5950-46c1-89f3-8f9726695c90!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-091578 -n addons-091578
helpers_test.go:261: (dbg) Run:  kubectl --context addons-091578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-47xkh ingress-nginx-admission-patch-dzc79
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-091578 describe pod ingress-nginx-admission-create-47xkh ingress-nginx-admission-patch-dzc79
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-091578 describe pod ingress-nginx-admission-create-47xkh ingress-nginx-admission-patch-dzc79: exit status 1 (61.510458ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-47xkh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dzc79" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-091578 describe pod ingress-nginx-admission-create-47xkh ingress-nginx-admission-patch-dzc79: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (364.29s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-091578
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-091578: exit status 82 (2m0.459468346s)

                                                
                                                
-- stdout --
	* Stopping node "addons-091578"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-091578" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-091578
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-091578: exit status 11 (21.488899055s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-091578" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-091578
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-091578: exit status 11 (6.142888206s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-091578" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-091578
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-091578: exit status 11 (6.143135609s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-091578" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.24s)

                                                
                                    
TestFunctional/parallel/MySQL (602.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-844183 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-ckf2n" [2676f801-5a85-4482-a5a2-138a2ec7e615] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1795: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1795: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-844183 -n functional-844183
functional_test.go:1795: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-07-30 00:36:25.233621636 +0000 UTC m=+1940.003223893
functional_test.go:1795: (dbg) Run:  kubectl --context functional-844183 describe po mysql-64454c8b5c-ckf2n -n default
functional_test.go:1795: (dbg) kubectl --context functional-844183 describe po mysql-64454c8b5c-ckf2n -n default:
Name:             mysql-64454c8b5c-ckf2n
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-844183/192.168.39.57
Start Time:       Tue, 30 Jul 2024 00:26:24 +0000
Labels:           app=mysql
                  pod-template-hash=64454c8b5c
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/mysql-64454c8b5c
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vss88 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vss88:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/mysql-64454c8b5c-ckf2n to functional-844183
functional_test.go:1795: (dbg) Run:  kubectl --context functional-844183 logs mysql-64454c8b5c-ckf2n -n default
functional_test.go:1795: (dbg) Non-zero exit: kubectl --context functional-844183 logs mysql-64454c8b5c-ckf2n -n default: exit status 1 (66.336595ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-64454c8b5c-ckf2n" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test.go:1795: kubectl --context functional-844183 logs mysql-64454c8b5c-ckf2n -n default: exit status 1
functional_test.go:1797: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-844183 -n functional-844183
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 logs -n 25: (1.411569754s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                      Args                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-844183 ssh findmnt                   | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | -T /mount1                                      |                   |         |         |                     |                     |
	| service        | functional-844183 service                       | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC |                     |
	|                | --namespace=default --https                     |                   |         |         |                     |                     |
	|                | --url hello-node                                |                   |         |         |                     |                     |
	| ssh            | functional-844183 ssh findmnt                   | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | -T /mount2                                      |                   |         |         |                     |                     |
	| ssh            | functional-844183 ssh findmnt                   | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | -T /mount3                                      |                   |         |         |                     |                     |
	| service        | functional-844183                               | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | service hello-node --url                        |                   |         |         |                     |                     |
	|                | --format={{.IP}}                                |                   |         |         |                     |                     |
	| mount          | -p functional-844183                            | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC |                     |
	|                | --kill=true                                     |                   |         |         |                     |                     |
	| ssh            | functional-844183 ssh sudo cat                  | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | /etc/test/nested/copy/502384/hosts              |                   |         |         |                     |                     |
	| service        | functional-844183 service                       | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | hello-node --url                                |                   |         |         |                     |                     |
	| addons         | functional-844183 addons list                   | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	| addons         | functional-844183 addons list                   | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | -o json                                         |                   |         |         |                     |                     |
	| ssh            | functional-844183 ssh echo                      | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | hello                                           |                   |         |         |                     |                     |
	| ssh            | functional-844183 ssh cat                       | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | /etc/hostname                                   |                   |         |         |                     |                     |
	| image          | functional-844183 image ls                      | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	| image          | functional-844183 image save --daemon           | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | docker.io/kicbase/echo-server:functional-844183 |                   |         |         |                     |                     |
	|                | --alsologtostderr                               |                   |         |         |                     |                     |
	| update-context | functional-844183                               | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | update-context                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                          |                   |         |         |                     |                     |
	| update-context | functional-844183                               | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | update-context                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                          |                   |         |         |                     |                     |
	| update-context | functional-844183                               | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | update-context                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                          |                   |         |         |                     |                     |
	| image          | functional-844183                               | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | image ls --format short                         |                   |         |         |                     |                     |
	|                | --alsologtostderr                               |                   |         |         |                     |                     |
	| image          | functional-844183                               | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | image ls --format yaml                          |                   |         |         |                     |                     |
	|                | --alsologtostderr                               |                   |         |         |                     |                     |
	| ssh            | functional-844183 ssh pgrep                     | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC |                     |
	|                | buildkitd                                       |                   |         |         |                     |                     |
	| image          | functional-844183 image build -t                | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | localhost/my-image:functional-844183            |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                |                   |         |         |                     |                     |
	| image          | functional-844183 image ls                      | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	| image          | functional-844183                               | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | image ls --format json                          |                   |         |         |                     |                     |
	|                | --alsologtostderr                               |                   |         |         |                     |                     |
	| image          | functional-844183                               | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | image ls --format table                         |                   |         |         |                     |                     |
	|                | --alsologtostderr                               |                   |         |         |                     |                     |
	| service        | functional-844183 service                       | functional-844183 | jenkins | v1.33.1 | 30 Jul 24 00:26 UTC | 30 Jul 24 00:26 UTC |
	|                | hello-node-connect --url                        |                   |         |         |                     |                     |
	|----------------|-------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:26:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:26:11.683699  512476 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:26:11.684218  512476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:26:11.684236  512476 out.go:304] Setting ErrFile to fd 2...
	I0730 00:26:11.684244  512476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:26:11.684461  512476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:26:11.685108  512476 out.go:298] Setting JSON to false
	I0730 00:26:11.686180  512476 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7714,"bootTime":1722291458,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:26:11.686261  512476 start.go:139] virtualization: kvm guest
	I0730 00:26:11.688702  512476 out.go:177] * [functional-844183] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:26:11.690408  512476 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:26:11.690437  512476 notify.go:220] Checking for updates...
	I0730 00:26:11.692977  512476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:26:11.694333  512476 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:26:11.695507  512476 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:26:11.696741  512476 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:26:11.697826  512476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:26:11.699619  512476 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:26:11.700308  512476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:26:11.700356  512476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:26:11.722231  512476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46817
	I0730 00:26:11.723283  512476 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:26:11.724033  512476 main.go:141] libmachine: Using API Version  1
	I0730 00:26:11.724096  512476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:26:11.724545  512476 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:26:11.724807  512476 main.go:141] libmachine: (functional-844183) Calling .DriverName
	I0730 00:26:11.725123  512476 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:26:11.725552  512476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:26:11.725589  512476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:26:11.754288  512476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41995
	I0730 00:26:11.754919  512476 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:26:11.755632  512476 main.go:141] libmachine: Using API Version  1
	I0730 00:26:11.755654  512476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:26:11.756195  512476 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:26:11.756436  512476 main.go:141] libmachine: (functional-844183) Calling .DriverName
	I0730 00:26:11.802213  512476 out.go:177] * Using the kvm2 driver based on existing profile
	I0730 00:26:11.803501  512476 start.go:297] selected driver: kvm2
	I0730 00:26:11.803518  512476 start.go:901] validating driver "kvm2" against &{Name:functional-844183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-844183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:26:11.803657  512476 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:26:11.805042  512476 cni.go:84] Creating CNI manager for ""
	I0730 00:26:11.805067  512476 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:26:11.805150  512476 start.go:340] cluster config:
	{Name:functional-844183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-844183 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:26:11.807477  512476 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.013340198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722299786013315653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:259222,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a77e61d-ce4b-41a5-92a1-7cbf9ee13e83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.014024955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e15b417f-70a0-41fa-9e81-699dd22417c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.014102436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e15b417f-70a0-41fa-9e81-699dd22417c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.014601788Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:210048982b468c25b1cbc6548e79956e14f11b881f3638902c23a63a9d3bb04a,PodSandboxId:7f165d4c956918aa984829ebc1e175e277513ac9cc616e530ca4bcd5cb1000a8,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a,State:CONTAINER_RUNNING,CreatedAt:1722299205721582408,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5516c93-260b-44cf-ac2c-58f5f448201a,},Annotations:map[string]string{io.kubernetes.container.hash: c0bc5cc,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ec5be886a3bced70f15edffaa4e83aa7a696c6794a129f1fd2e64a4615e53a,PodSandboxId:ed98b9490bc82d3305329a032db8f1e4f3f4c25e94d6b58e59ca011d2a4cac12,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1722299188175827311,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-bvfsl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 40503c9d-9664-4436-97ea-2e564f52b42f,},Annotations:map[string]string{io.kubernetes.container.
hash: 4836a62e,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfecaa773dbe1430443e211fcc22a389fc219022829e347895d823e5b5913eb2,PodSandboxId:95d2524e6fd86080eb6dd45af2b185b35b38cc6c950161b609bf98e5d53372a2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722299186118955418,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-b2fr6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d46f017-50a1-4470-ad17-61bbe14c
5e20,},Annotations:map[string]string{io.kubernetes.container.hash: 3310f1d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374085947e8d4a0704892343919e18dada5667bdff9d8baffcaed80991076c91,PodSandboxId:79656e8dd0f2583299efb035da14ad638fbd06958581f99d2a071d0058655284,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1722299184799233304,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-x6bn2,io.kubernetes.pod.namespace: kubernetes-dashboar
d,io.kubernetes.pod.uid: 006796ce-38c1-45bb-a09e-635636665a01,},Annotations:map[string]string{io.kubernetes.container.hash: 8bee59df,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99610cdf6634906d1054efef9816756f6c889f4f9f14bede0372c3829a4b9d94,PodSandboxId:0ead7f8a77337bec1e958d76f29a72796851ea1fcee7dabdf5a0f1b686dfa468,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1722299177807927792,Labels:map[string]string{io.kubernetes.container.name: mount-mun
ger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b56fa44c-8542-4671-84ec-0f98a9e490b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5aec8318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c15a1ec2b9e949c1d71a13ae8e278cbf6e0be4a3186ee9a432c7e7e6780ba0,PodSandboxId:9a2a8df0ffeb075e5f5ac8f92ecd325449e7fc89cea362be062146d895708619,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722299175053833069,Labels:map[string]string{io.kubernetes.container.name: echoserver,io
.kubernetes.pod.name: hello-node-6d85cfcfd8-sqjzk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ac4d09c-be2e-464e-a4b7-60990a45b5ec,},Annotations:map[string]string{io.kubernetes.container.hash: 179ad35c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c3c9f689e1b2c6085127c3197c4bf3f57ac511e4ad9554cfa0cc844ddbe706,PodSandboxId:1a7a10d4fdbc7835b03dab8b01e5f5f05159f13b46d0a2bf886290b0ac33d023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722299143830627004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube
-proxy-mzkmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43466c0c-c47b-4ac7-978a-f89ff9cb805a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a238704,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:060e1f7498707236b024ed732d80d61fc2ee6e8168ccc6035adb926d86c6f73e,PodSandboxId:8368c562b3ecfca6bf97db8cc21ec6a8662e85eb4f635610f3b09a94ff702acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722299143854944822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 000534bf-3966-4eda-8ffa-62739142ff82,},Annotations:map[string]string{io.kubernetes.container.hash: 3c047b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c1b8582a1b6f6a08ef336916eb1458272d64fc7ecbe60c9ca4466f1477d160,PodSandboxId:643778bd103468a6e81342dcafccdb55bab38f3ac3476fd1a096792552739613,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299143865649281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7pbzf,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: dabb1326-21ad-46f1-bf97-d21cb960b23b,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee3f21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a77073b930419a3385e17886607a0182703db97f7277d1f3c84a8c577a27ee,PodSandboxId:3e14ac21a517c7b962be85a9c8eb9f0cc5fe517c91af2ce5f86b61901db227dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36
b095d,State:CONTAINER_RUNNING,CreatedAt:1722299140144133999,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b92d60b23d3bc4fef64dc1d39134e98c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c119706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146edd33bda6fe064a8e959abd0412e4e2701475cded8efa292cf32ef0eff1fe,PodSandboxId:0ef7ac6c69f3ea70e7a66158c8b0d5e6d79a71e5d0ec1269e6ca9bd3d2c7854b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5
e,State:CONTAINER_RUNNING,CreatedAt:1722299140000873998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affacfaf3fd0c387fee6011051c44ca7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135d7bf139c63450e35c2455c6846ac12bee7ea45ef7acdb0d61db3cf109f6ae,PodSandboxId:20819cc2cdb708ff607a89b445fca422c91fe6176f9b4f100bc4da4cfcd0d796,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Sta
te:CONTAINER_RUNNING,CreatedAt:1722299139984411533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223757e62ed4c0013c16d3dc58b4cb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3a508891,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:075804d59c12b34352db676a36d5c975ed065ff9b0b9956870991aec805681ea,PodSandboxId:7174766ecc3f51f81a4fa91551dd0108aff65b4fb1c5758f5c803559b81c2bee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17
22299140003908784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e26ffe0d7f5b35f2d328f2c12b89d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455ab3db72dcabecd9be8279a51e07ae1f07bc6511acf5deddef1accd57d7435,PodSandboxId:f6aec316fcb9425addaaa52f80870ce1f387b0a18b4de1d026a5964de515913c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299100553211495,Lab
els:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7pbzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dabb1326-21ad-46f1-bf97-d21cb960b23b,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee3f21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51db790f977bdb0f1457be7d11ce7fdc76b3822d56e60a8413c95e2e8bf30d9e,PodSandboxId:9eba7275c6f18e201715ddc7102d451a17012f3fbbd416ba475f8552b57f20b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299100108278587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mzkmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43466c0c-c47b-4ac7-978a-f89ff9cb805a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a238704,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77bab17669d0006e6e274525de3c406a93fb2681c913ecd62d3a68cf10cea18,PodSandboxId:5a4fe488e9ab84060d9b34fc40e42088c706b320116d3a0e808103f4d8abf873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722299100117927638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 000534bf-3966-4eda-8ffa-62739142ff82,},Annotations:map[string]string{io.kubernetes.container.hash: 3c047b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe73431c5d43e428693cb468b8365a91b45bbd5bb771184acd2c105a6dbe546f,PodSandboxId:47f26225e4fe373370beceb6d96022264fcc0bbc0deab37d250a26aed0c2c84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299096286084747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223757e62ed4c0013c16d3dc58b4cb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3a508891,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a38d212e629552a9c1a5ea83807e2fe42fe91a8aa5bf725b572bcd4d345519,PodSandboxId:ca09692b17e4d7717fe1b59c2bb68a9d1a81c620f1de0665ff0f93e97a0e137e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76
722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722299096279077532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e26ffe0d7f5b35f2d328f2c12b89d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16634231f5c6a2f09a8d96b257dfd5500cba6a9a7c916900475abb430e649f53,PodSandboxId:ab2fa667a7986cc8f1746b84983f60e59871eec14ee8fbf64a4298cba2ab2559,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb13
8c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722299096255955157,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affacfaf3fd0c387fee6011051c44ca7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e15b417f-70a0-41fa-9e81-699dd22417c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.053865697Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16d7744d-b3c8-4e19-aeb0-4b5daae15821 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.053941186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16d7744d-b3c8-4e19-aeb0-4b5daae15821 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.055009222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd938b34-5e9f-4372-bf91-e6820139cb2a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.055917784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722299786055892618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:259222,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd938b34-5e9f-4372-bf91-e6820139cb2a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.056521003Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f9019af-869f-47eb-9031-e37ae41d3f22 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.056628685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f9019af-869f-47eb-9031-e37ae41d3f22 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.056996030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:210048982b468c25b1cbc6548e79956e14f11b881f3638902c23a63a9d3bb04a,PodSandboxId:7f165d4c956918aa984829ebc1e175e277513ac9cc616e530ca4bcd5cb1000a8,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a,State:CONTAINER_RUNNING,CreatedAt:1722299205721582408,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5516c93-260b-44cf-ac2c-58f5f448201a,},Annotations:map[string]string{io.kubernetes.container.hash: c0bc5cc,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ec5be886a3bced70f15edffaa4e83aa7a696c6794a129f1fd2e64a4615e53a,PodSandboxId:ed98b9490bc82d3305329a032db8f1e4f3f4c25e94d6b58e59ca011d2a4cac12,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1722299188175827311,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-bvfsl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 40503c9d-9664-4436-97ea-2e564f52b42f,},Annotations:map[string]string{io.kubernetes.container.
hash: 4836a62e,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfecaa773dbe1430443e211fcc22a389fc219022829e347895d823e5b5913eb2,PodSandboxId:95d2524e6fd86080eb6dd45af2b185b35b38cc6c950161b609bf98e5d53372a2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722299186118955418,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-b2fr6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d46f017-50a1-4470-ad17-61bbe14c
5e20,},Annotations:map[string]string{io.kubernetes.container.hash: 3310f1d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374085947e8d4a0704892343919e18dada5667bdff9d8baffcaed80991076c91,PodSandboxId:79656e8dd0f2583299efb035da14ad638fbd06958581f99d2a071d0058655284,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1722299184799233304,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-x6bn2,io.kubernetes.pod.namespace: kubernetes-dashboar
d,io.kubernetes.pod.uid: 006796ce-38c1-45bb-a09e-635636665a01,},Annotations:map[string]string{io.kubernetes.container.hash: 8bee59df,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99610cdf6634906d1054efef9816756f6c889f4f9f14bede0372c3829a4b9d94,PodSandboxId:0ead7f8a77337bec1e958d76f29a72796851ea1fcee7dabdf5a0f1b686dfa468,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1722299177807927792,Labels:map[string]string{io.kubernetes.container.name: mount-mun
ger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b56fa44c-8542-4671-84ec-0f98a9e490b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5aec8318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c15a1ec2b9e949c1d71a13ae8e278cbf6e0be4a3186ee9a432c7e7e6780ba0,PodSandboxId:9a2a8df0ffeb075e5f5ac8f92ecd325449e7fc89cea362be062146d895708619,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722299175053833069,Labels:map[string]string{io.kubernetes.container.name: echoserver,io
.kubernetes.pod.name: hello-node-6d85cfcfd8-sqjzk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ac4d09c-be2e-464e-a4b7-60990a45b5ec,},Annotations:map[string]string{io.kubernetes.container.hash: 179ad35c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c3c9f689e1b2c6085127c3197c4bf3f57ac511e4ad9554cfa0cc844ddbe706,PodSandboxId:1a7a10d4fdbc7835b03dab8b01e5f5f05159f13b46d0a2bf886290b0ac33d023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722299143830627004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube
-proxy-mzkmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43466c0c-c47b-4ac7-978a-f89ff9cb805a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a238704,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:060e1f7498707236b024ed732d80d61fc2ee6e8168ccc6035adb926d86c6f73e,PodSandboxId:8368c562b3ecfca6bf97db8cc21ec6a8662e85eb4f635610f3b09a94ff702acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722299143854944822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 000534bf-3966-4eda-8ffa-62739142ff82,},Annotations:map[string]string{io.kubernetes.container.hash: 3c047b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c1b8582a1b6f6a08ef336916eb1458272d64fc7ecbe60c9ca4466f1477d160,PodSandboxId:643778bd103468a6e81342dcafccdb55bab38f3ac3476fd1a096792552739613,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299143865649281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7pbzf,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: dabb1326-21ad-46f1-bf97-d21cb960b23b,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee3f21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a77073b930419a3385e17886607a0182703db97f7277d1f3c84a8c577a27ee,PodSandboxId:3e14ac21a517c7b962be85a9c8eb9f0cc5fe517c91af2ce5f86b61901db227dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36
b095d,State:CONTAINER_RUNNING,CreatedAt:1722299140144133999,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b92d60b23d3bc4fef64dc1d39134e98c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c119706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146edd33bda6fe064a8e959abd0412e4e2701475cded8efa292cf32ef0eff1fe,PodSandboxId:0ef7ac6c69f3ea70e7a66158c8b0d5e6d79a71e5d0ec1269e6ca9bd3d2c7854b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5
e,State:CONTAINER_RUNNING,CreatedAt:1722299140000873998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affacfaf3fd0c387fee6011051c44ca7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135d7bf139c63450e35c2455c6846ac12bee7ea45ef7acdb0d61db3cf109f6ae,PodSandboxId:20819cc2cdb708ff607a89b445fca422c91fe6176f9b4f100bc4da4cfcd0d796,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Sta
te:CONTAINER_RUNNING,CreatedAt:1722299139984411533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223757e62ed4c0013c16d3dc58b4cb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3a508891,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:075804d59c12b34352db676a36d5c975ed065ff9b0b9956870991aec805681ea,PodSandboxId:7174766ecc3f51f81a4fa91551dd0108aff65b4fb1c5758f5c803559b81c2bee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17
22299140003908784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e26ffe0d7f5b35f2d328f2c12b89d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455ab3db72dcabecd9be8279a51e07ae1f07bc6511acf5deddef1accd57d7435,PodSandboxId:f6aec316fcb9425addaaa52f80870ce1f387b0a18b4de1d026a5964de515913c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299100553211495,Lab
els:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7pbzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dabb1326-21ad-46f1-bf97-d21cb960b23b,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee3f21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51db790f977bdb0f1457be7d11ce7fdc76b3822d56e60a8413c95e2e8bf30d9e,PodSandboxId:9eba7275c6f18e201715ddc7102d451a17012f3fbbd416ba475f8552b57f20b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299100108278587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mzkmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43466c0c-c47b-4ac7-978a-f89ff9cb805a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a238704,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77bab17669d0006e6e274525de3c406a93fb2681c913ecd62d3a68cf10cea18,PodSandboxId:5a4fe488e9ab84060d9b34fc40e42088c706b320116d3a0e808103f4d8abf873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722299100117927638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 000534bf-3966-4eda-8ffa-62739142ff82,},Annotations:map[string]string{io.kubernetes.container.hash: 3c047b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe73431c5d43e428693cb468b8365a91b45bbd5bb771184acd2c105a6dbe546f,PodSandboxId:47f26225e4fe373370beceb6d96022264fcc0bbc0deab37d250a26aed0c2c84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299096286084747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223757e62ed4c0013c16d3dc58b4cb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3a508891,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a38d212e629552a9c1a5ea83807e2fe42fe91a8aa5bf725b572bcd4d345519,PodSandboxId:ca09692b17e4d7717fe1b59c2bb68a9d1a81c620f1de0665ff0f93e97a0e137e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76
722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722299096279077532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e26ffe0d7f5b35f2d328f2c12b89d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16634231f5c6a2f09a8d96b257dfd5500cba6a9a7c916900475abb430e649f53,PodSandboxId:ab2fa667a7986cc8f1746b84983f60e59871eec14ee8fbf64a4298cba2ab2559,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb13
8c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722299096255955157,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affacfaf3fd0c387fee6011051c44ca7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f9019af-869f-47eb-9031-e37ae41d3f22 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.092425491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26451f45-62ef-4a7e-8d5b-853dfe0a3a6e name=/runtime.v1.RuntimeService/Version
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.092504500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26451f45-62ef-4a7e-8d5b-853dfe0a3a6e name=/runtime.v1.RuntimeService/Version
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.093820850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1eeca6f2-8c91-4172-8048-9bfb978c4307 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.094502673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722299786094474035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:259222,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1eeca6f2-8c91-4172-8048-9bfb978c4307 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.095011164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7237fbf-7337-451a-afb2-66c1f7c313c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.095083592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7237fbf-7337-451a-afb2-66c1f7c313c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.095457250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:210048982b468c25b1cbc6548e79956e14f11b881f3638902c23a63a9d3bb04a,PodSandboxId:7f165d4c956918aa984829ebc1e175e277513ac9cc616e530ca4bcd5cb1000a8,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a,State:CONTAINER_RUNNING,CreatedAt:1722299205721582408,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5516c93-260b-44cf-ac2c-58f5f448201a,},Annotations:map[string]string{io.kubernetes.container.hash: c0bc5cc,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ec5be886a3bced70f15edffaa4e83aa7a696c6794a129f1fd2e64a4615e53a,PodSandboxId:ed98b9490bc82d3305329a032db8f1e4f3f4c25e94d6b58e59ca011d2a4cac12,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1722299188175827311,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-bvfsl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 40503c9d-9664-4436-97ea-2e564f52b42f,},Annotations:map[string]string{io.kubernetes.container.
hash: 4836a62e,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfecaa773dbe1430443e211fcc22a389fc219022829e347895d823e5b5913eb2,PodSandboxId:95d2524e6fd86080eb6dd45af2b185b35b38cc6c950161b609bf98e5d53372a2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722299186118955418,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-b2fr6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d46f017-50a1-4470-ad17-61bbe14c
5e20,},Annotations:map[string]string{io.kubernetes.container.hash: 3310f1d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374085947e8d4a0704892343919e18dada5667bdff9d8baffcaed80991076c91,PodSandboxId:79656e8dd0f2583299efb035da14ad638fbd06958581f99d2a071d0058655284,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1722299184799233304,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-x6bn2,io.kubernetes.pod.namespace: kubernetes-dashboar
d,io.kubernetes.pod.uid: 006796ce-38c1-45bb-a09e-635636665a01,},Annotations:map[string]string{io.kubernetes.container.hash: 8bee59df,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99610cdf6634906d1054efef9816756f6c889f4f9f14bede0372c3829a4b9d94,PodSandboxId:0ead7f8a77337bec1e958d76f29a72796851ea1fcee7dabdf5a0f1b686dfa468,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1722299177807927792,Labels:map[string]string{io.kubernetes.container.name: mount-mun
ger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b56fa44c-8542-4671-84ec-0f98a9e490b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5aec8318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c15a1ec2b9e949c1d71a13ae8e278cbf6e0be4a3186ee9a432c7e7e6780ba0,PodSandboxId:9a2a8df0ffeb075e5f5ac8f92ecd325449e7fc89cea362be062146d895708619,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722299175053833069,Labels:map[string]string{io.kubernetes.container.name: echoserver,io
.kubernetes.pod.name: hello-node-6d85cfcfd8-sqjzk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ac4d09c-be2e-464e-a4b7-60990a45b5ec,},Annotations:map[string]string{io.kubernetes.container.hash: 179ad35c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c3c9f689e1b2c6085127c3197c4bf3f57ac511e4ad9554cfa0cc844ddbe706,PodSandboxId:1a7a10d4fdbc7835b03dab8b01e5f5f05159f13b46d0a2bf886290b0ac33d023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722299143830627004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube
-proxy-mzkmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43466c0c-c47b-4ac7-978a-f89ff9cb805a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a238704,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:060e1f7498707236b024ed732d80d61fc2ee6e8168ccc6035adb926d86c6f73e,PodSandboxId:8368c562b3ecfca6bf97db8cc21ec6a8662e85eb4f635610f3b09a94ff702acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722299143854944822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 000534bf-3966-4eda-8ffa-62739142ff82,},Annotations:map[string]string{io.kubernetes.container.hash: 3c047b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c1b8582a1b6f6a08ef336916eb1458272d64fc7ecbe60c9ca4466f1477d160,PodSandboxId:643778bd103468a6e81342dcafccdb55bab38f3ac3476fd1a096792552739613,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299143865649281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7pbzf,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: dabb1326-21ad-46f1-bf97-d21cb960b23b,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee3f21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a77073b930419a3385e17886607a0182703db97f7277d1f3c84a8c577a27ee,PodSandboxId:3e14ac21a517c7b962be85a9c8eb9f0cc5fe517c91af2ce5f86b61901db227dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36
b095d,State:CONTAINER_RUNNING,CreatedAt:1722299140144133999,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b92d60b23d3bc4fef64dc1d39134e98c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c119706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146edd33bda6fe064a8e959abd0412e4e2701475cded8efa292cf32ef0eff1fe,PodSandboxId:0ef7ac6c69f3ea70e7a66158c8b0d5e6d79a71e5d0ec1269e6ca9bd3d2c7854b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5
e,State:CONTAINER_RUNNING,CreatedAt:1722299140000873998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affacfaf3fd0c387fee6011051c44ca7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135d7bf139c63450e35c2455c6846ac12bee7ea45ef7acdb0d61db3cf109f6ae,PodSandboxId:20819cc2cdb708ff607a89b445fca422c91fe6176f9b4f100bc4da4cfcd0d796,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Sta
te:CONTAINER_RUNNING,CreatedAt:1722299139984411533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223757e62ed4c0013c16d3dc58b4cb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3a508891,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:075804d59c12b34352db676a36d5c975ed065ff9b0b9956870991aec805681ea,PodSandboxId:7174766ecc3f51f81a4fa91551dd0108aff65b4fb1c5758f5c803559b81c2bee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17
22299140003908784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e26ffe0d7f5b35f2d328f2c12b89d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455ab3db72dcabecd9be8279a51e07ae1f07bc6511acf5deddef1accd57d7435,PodSandboxId:f6aec316fcb9425addaaa52f80870ce1f387b0a18b4de1d026a5964de515913c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299100553211495,Lab
els:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7pbzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dabb1326-21ad-46f1-bf97-d21cb960b23b,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee3f21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51db790f977bdb0f1457be7d11ce7fdc76b3822d56e60a8413c95e2e8bf30d9e,PodSandboxId:9eba7275c6f18e201715ddc7102d451a17012f3fbbd416ba475f8552b57f20b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299100108278587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mzkmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43466c0c-c47b-4ac7-978a-f89ff9cb805a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a238704,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77bab17669d0006e6e274525de3c406a93fb2681c913ecd62d3a68cf10cea18,PodSandboxId:5a4fe488e9ab84060d9b34fc40e42088c706b320116d3a0e808103f4d8abf873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722299100117927638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 000534bf-3966-4eda-8ffa-62739142ff82,},Annotations:map[string]string{io.kubernetes.container.hash: 3c047b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe73431c5d43e428693cb468b8365a91b45bbd5bb771184acd2c105a6dbe546f,PodSandboxId:47f26225e4fe373370beceb6d96022264fcc0bbc0deab37d250a26aed0c2c84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299096286084747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223757e62ed4c0013c16d3dc58b4cb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3a508891,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a38d212e629552a9c1a5ea83807e2fe42fe91a8aa5bf725b572bcd4d345519,PodSandboxId:ca09692b17e4d7717fe1b59c2bb68a9d1a81c620f1de0665ff0f93e97a0e137e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76
722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722299096279077532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e26ffe0d7f5b35f2d328f2c12b89d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16634231f5c6a2f09a8d96b257dfd5500cba6a9a7c916900475abb430e649f53,PodSandboxId:ab2fa667a7986cc8f1746b84983f60e59871eec14ee8fbf64a4298cba2ab2559,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb13
8c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722299096255955157,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affacfaf3fd0c387fee6011051c44ca7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7237fbf-7337-451a-afb2-66c1f7c313c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.137043438Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d810597-d5c6-400e-9fbe-6a82ce2e1703 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.137128393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d810597-d5c6-400e-9fbe-6a82ce2e1703 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.138499037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87484425-c359-4c36-9c1a-2bc5096023aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.139439219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722299786139412878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:259222,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87484425-c359-4c36-9c1a-2bc5096023aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.140591840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e8a1c78-d995-44bd-8bd9-5e80f805aafb name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.140664491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e8a1c78-d995-44bd-8bd9-5e80f805aafb name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:36:26 functional-844183 crio[4456]: time="2024-07-30 00:36:26.141022278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:210048982b468c25b1cbc6548e79956e14f11b881f3638902c23a63a9d3bb04a,PodSandboxId:7f165d4c956918aa984829ebc1e175e277513ac9cc616e530ca4bcd5cb1000a8,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a,State:CONTAINER_RUNNING,CreatedAt:1722299205721582408,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5516c93-260b-44cf-ac2c-58f5f448201a,},Annotations:map[string]string{io.kubernetes.container.hash: c0bc5cc,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ec5be886a3bced70f15edffaa4e83aa7a696c6794a129f1fd2e64a4615e53a,PodSandboxId:ed98b9490bc82d3305329a032db8f1e4f3f4c25e94d6b58e59ca011d2a4cac12,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1722299188175827311,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-bvfsl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 40503c9d-9664-4436-97ea-2e564f52b42f,},Annotations:map[string]string{io.kubernetes.container.
hash: 4836a62e,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfecaa773dbe1430443e211fcc22a389fc219022829e347895d823e5b5913eb2,PodSandboxId:95d2524e6fd86080eb6dd45af2b185b35b38cc6c950161b609bf98e5d53372a2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722299186118955418,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-b2fr6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d46f017-50a1-4470-ad17-61bbe14c
5e20,},Annotations:map[string]string{io.kubernetes.container.hash: 3310f1d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374085947e8d4a0704892343919e18dada5667bdff9d8baffcaed80991076c91,PodSandboxId:79656e8dd0f2583299efb035da14ad638fbd06958581f99d2a071d0058655284,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1722299184799233304,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-x6bn2,io.kubernetes.pod.namespace: kubernetes-dashboar
d,io.kubernetes.pod.uid: 006796ce-38c1-45bb-a09e-635636665a01,},Annotations:map[string]string{io.kubernetes.container.hash: 8bee59df,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99610cdf6634906d1054efef9816756f6c889f4f9f14bede0372c3829a4b9d94,PodSandboxId:0ead7f8a77337bec1e958d76f29a72796851ea1fcee7dabdf5a0f1b686dfa468,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1722299177807927792,Labels:map[string]string{io.kubernetes.container.name: mount-mun
ger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b56fa44c-8542-4671-84ec-0f98a9e490b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5aec8318,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c15a1ec2b9e949c1d71a13ae8e278cbf6e0be4a3186ee9a432c7e7e6780ba0,PodSandboxId:9a2a8df0ffeb075e5f5ac8f92ecd325449e7fc89cea362be062146d895708619,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722299175053833069,Labels:map[string]string{io.kubernetes.container.name: echoserver,io
.kubernetes.pod.name: hello-node-6d85cfcfd8-sqjzk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ac4d09c-be2e-464e-a4b7-60990a45b5ec,},Annotations:map[string]string{io.kubernetes.container.hash: 179ad35c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3c3c9f689e1b2c6085127c3197c4bf3f57ac511e4ad9554cfa0cc844ddbe706,PodSandboxId:1a7a10d4fdbc7835b03dab8b01e5f5f05159f13b46d0a2bf886290b0ac33d023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722299143830627004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube
-proxy-mzkmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43466c0c-c47b-4ac7-978a-f89ff9cb805a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a238704,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:060e1f7498707236b024ed732d80d61fc2ee6e8168ccc6035adb926d86c6f73e,PodSandboxId:8368c562b3ecfca6bf97db8cc21ec6a8662e85eb4f635610f3b09a94ff702acb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722299143854944822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 000534bf-3966-4eda-8ffa-62739142ff82,},Annotations:map[string]string{io.kubernetes.container.hash: 3c047b1d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c1b8582a1b6f6a08ef336916eb1458272d64fc7ecbe60c9ca4466f1477d160,PodSandboxId:643778bd103468a6e81342dcafccdb55bab38f3ac3476fd1a096792552739613,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299143865649281,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7pbzf,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: dabb1326-21ad-46f1-bf97-d21cb960b23b,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee3f21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a77073b930419a3385e17886607a0182703db97f7277d1f3c84a8c577a27ee,PodSandboxId:3e14ac21a517c7b962be85a9c8eb9f0cc5fe517c91af2ce5f86b61901db227dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36
b095d,State:CONTAINER_RUNNING,CreatedAt:1722299140144133999,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b92d60b23d3bc4fef64dc1d39134e98c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c119706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146edd33bda6fe064a8e959abd0412e4e2701475cded8efa292cf32ef0eff1fe,PodSandboxId:0ef7ac6c69f3ea70e7a66158c8b0d5e6d79a71e5d0ec1269e6ca9bd3d2c7854b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5
e,State:CONTAINER_RUNNING,CreatedAt:1722299140000873998,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affacfaf3fd0c387fee6011051c44ca7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135d7bf139c63450e35c2455c6846ac12bee7ea45ef7acdb0d61db3cf109f6ae,PodSandboxId:20819cc2cdb708ff607a89b445fca422c91fe6176f9b4f100bc4da4cfcd0d796,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Sta
te:CONTAINER_RUNNING,CreatedAt:1722299139984411533,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223757e62ed4c0013c16d3dc58b4cb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3a508891,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:075804d59c12b34352db676a36d5c975ed065ff9b0b9956870991aec805681ea,PodSandboxId:7174766ecc3f51f81a4fa91551dd0108aff65b4fb1c5758f5c803559b81c2bee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17
22299140003908784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e26ffe0d7f5b35f2d328f2c12b89d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455ab3db72dcabecd9be8279a51e07ae1f07bc6511acf5deddef1accd57d7435,PodSandboxId:f6aec316fcb9425addaaa52f80870ce1f387b0a18b4de1d026a5964de515913c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299100553211495,Lab
els:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7pbzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dabb1326-21ad-46f1-bf97-d21cb960b23b,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee3f21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51db790f977bdb0f1457be7d11ce7fdc76b3822d56e60a8413c95e2e8bf30d9e,PodSandboxId:9eba7275c6f18e201715ddc7102d451a17012f3fbbd416ba475f8552b57f20b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299100108278587,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mzkmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43466c0c-c47b-4ac7-978a-f89ff9cb805a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a238704,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77bab17669d0006e6e274525de3c406a93fb2681c913ecd62d3a68cf10cea18,PodSandboxId:5a4fe488e9ab84060d9b34fc40e42088c706b320116d3a0e808103f4d8abf873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722299100117927638,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 000534bf-3966-4eda-8ffa-62739142ff82,},Annotations:map[string]string{io.kubernetes.container.hash: 3c047b1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe73431c5d43e428693cb468b8365a91b45bbd5bb771184acd2c105a6dbe546f,PodSandboxId:47f26225e4fe373370beceb6d96022264fcc0bbc0deab37d250a26aed0c2c84b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299096286084747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 223757e62ed4c0013c16d3dc58b4cb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 3a508891,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6a38d212e629552a9c1a5ea83807e2fe42fe91a8aa5bf725b572bcd4d345519,PodSandboxId:ca09692b17e4d7717fe1b59c2bb68a9d1a81c620f1de0665ff0f93e97a0e137e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76
722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722299096279077532,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e26ffe0d7f5b35f2d328f2c12b89d0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16634231f5c6a2f09a8d96b257dfd5500cba6a9a7c916900475abb430e649f53,PodSandboxId:ab2fa667a7986cc8f1746b84983f60e59871eec14ee8fbf64a4298cba2ab2559,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb13
8c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722299096255955157,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-844183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: affacfaf3fd0c387fee6011051c44ca7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e8a1c78-d995-44bd-8bd9-5e80f805aafb name=/runtime.v1.RuntimeService/ListContainers
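	(For reference: the Request/Response pairs above are CRI gRPC calls (Version, ImageFsInfo, ListContainers) that cri-o logs at debug level on the socket named in the node annotation unix:///var/run/crio/crio.sock. A minimal sketch of issuing the same queries by hand, assuming the functional-844183 guest is still running and that crictl is on its PATH as in the stock minikube image:
	
	  out/minikube-linux-amd64 -p functional-844183 ssh "sudo crictl version"
	  out/minikube-linux-amd64 -p functional-844183 ssh "sudo crictl imagefsinfo"
	  out/minikube-linux-amd64 -p functional-844183 ssh "sudo crictl ps -a"
	
	The last command lists the same container inventory that appears in the "container status" section below.)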
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	210048982b468       docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                  9 minutes ago       Running             myfrontend                  0                   7f165d4c95691       sp-pod
	f7ec5be886a3b       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   ed98b9490bc82       dashboard-metrics-scraper-b5fc48f67-bvfsl
	bfecaa773dbe1       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 10 minutes ago      Running             echoserver                  0                   95d2524e6fd86       hello-node-connect-57b4589c47-b2fr6
	374085947e8d4       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   79656e8dd0f25       kubernetes-dashboard-779776cb65-x6bn2
	99610cdf66349       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   0ead7f8a77337       busybox-mount
	11c15a1ec2b9e       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   9a2a8df0ffeb0       hello-node-6d85cfcfd8-sqjzk
	f5c1b8582a1b6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 10 minutes ago      Running             coredns                     2                   643778bd10346       coredns-7db6d8ff4d-7pbzf
	060e1f7498707       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   8368c562b3ecf       storage-provisioner
	f3c3c9f689e1b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                 10 minutes ago      Running             kube-proxy                  2                   1a7a10d4fdbc7       kube-proxy-mzkmb
	d6a77073b9304       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                 10 minutes ago      Running             kube-apiserver              0                   3e14ac21a517c       kube-apiserver-functional-844183
	075804d59c12b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                 10 minutes ago      Running             kube-scheduler              2                   7174766ecc3f5       kube-scheduler-functional-844183
	146edd33bda6f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                 10 minutes ago      Running             kube-controller-manager     2                   0ef7ac6c69f3e       kube-controller-manager-functional-844183
	135d7bf139c63       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 10 minutes ago      Running             etcd                        2                   20819cc2cdb70       etcd-functional-844183
	455ab3db72dca       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 11 minutes ago      Exited              coredns                     1                   f6aec316fcb94       coredns-7db6d8ff4d-7pbzf
	b77bab17669d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   5a4fe488e9ab8       storage-provisioner
	51db790f977bd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                 11 minutes ago      Exited              kube-proxy                  1                   9eba7275c6f18       kube-proxy-mzkmb
	fe73431c5d43e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 11 minutes ago      Exited              etcd                        1                   47f26225e4fe3       etcd-functional-844183
	e6a38d212e629       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                 11 minutes ago      Exited              kube-scheduler              1                   ca09692b17e4d       kube-scheduler-functional-844183
	16634231f5c6a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                 11 minutes ago      Exited              kube-controller-manager     1                   ab2fa667a7986       kube-controller-manager-functional-844183
	
	
	==> coredns [455ab3db72dcabecd9be8279a51e07ae1f07bc6511acf5deddef1accd57d7435] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41906 - 37747 "HINFO IN 8645174928280502551.3030607586365331837. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009590705s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f5c1b8582a1b6f6a08ef336916eb1458272d64fc7ecbe60c9ca4466f1477d160] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34566 - 34413 "HINFO IN 2563185491828014483.1075145468294593840. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018864911s
	
	
	==> describe nodes <==
	Name:               functional-844183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-844183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=functional-844183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T00_24_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:24:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-844183
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:36:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:31:50 +0000   Tue, 30 Jul 2024 00:24:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:31:50 +0000   Tue, 30 Jul 2024 00:24:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:31:50 +0000   Tue, 30 Jul 2024 00:24:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:31:50 +0000   Tue, 30 Jul 2024 00:24:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    functional-844183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c47f53ceb6a54bf89756481cff66ec52
	  System UUID:                c47f53ce-b6a5-4bf8-9756-481cff66ec52
	  Boot ID:                    69b8a9ec-9534-4a4f-9501-6272659cec40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-sqjzk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-57b4589c47-b2fr6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-64454c8b5c-ckf2n                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 coredns-7db6d8ff4d-7pbzf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-844183                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-844183             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-844183    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mzkmb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-844183             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-bvfsl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-x6bn2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
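	  (For reference, these percentages are the summed requests and limits divided by the node's Allocatable figures above: CPU 1350m of 2000m is roughly 67% and 700m is 35%; memory 682Mi of 3912780Ki, about 3821Mi, is roughly 17% and 870Mi roughly 22%.)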
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-844183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-844183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-844183 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-844183 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-844183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-844183 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-844183 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node functional-844183 event: Registered Node functional-844183 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-844183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-844183 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-844183 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-844183 event: Registered Node functional-844183 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-844183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-844183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-844183 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-844183 event: Registered Node functional-844183 in Controller
	
	
	==> dmesg <==
	[  +0.180381] systemd-fstab-generator[2450]: Ignoring "noauto" option for root device
	[  +0.148579] systemd-fstab-generator[2462]: Ignoring "noauto" option for root device
	[  +0.266899] systemd-fstab-generator[2490]: Ignoring "noauto" option for root device
	[  +0.697736] systemd-fstab-generator[2644]: Ignoring "noauto" option for root device
	[  +2.597943] systemd-fstab-generator[3008]: Ignoring "noauto" option for root device
	[  +4.346446] kauditd_printk_skb: 205 callbacks suppressed
	[Jul30 00:25] kauditd_printk_skb: 21 callbacks suppressed
	[  +3.520568] systemd-fstab-generator[3577]: Ignoring "noauto" option for root device
	[ +19.914401] systemd-fstab-generator[4374]: Ignoring "noauto" option for root device
	[  +0.072664] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.068449] systemd-fstab-generator[4386]: Ignoring "noauto" option for root device
	[  +0.164264] systemd-fstab-generator[4400]: Ignoring "noauto" option for root device
	[  +0.135564] systemd-fstab-generator[4412]: Ignoring "noauto" option for root device
	[  +0.244563] systemd-fstab-generator[4440]: Ignoring "noauto" option for root device
	[  +1.187104] systemd-fstab-generator[4904]: Ignoring "noauto" option for root device
	[  +1.618188] systemd-fstab-generator[5037]: Ignoring "noauto" option for root device
	[  +4.302354] kauditd_printk_skb: 231 callbacks suppressed
	[ +11.765975] kauditd_printk_skb: 10 callbacks suppressed
	[Jul30 00:26] systemd-fstab-generator[5578]: Ignoring "noauto" option for root device
	[  +5.168789] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.397090] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.321706] kauditd_printk_skb: 39 callbacks suppressed
	[  +8.439930] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.337457] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.538643] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [135d7bf139c63450e35c2455c6846ac12bee7ea45ef7acdb0d61db3cf109f6ae] <==
	{"level":"info","ts":"2024-07-30T00:25:41.417358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 4"}
	{"level":"info","ts":"2024-07-30T00:25:41.422514Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:functional-844183 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T00:25:41.422724Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T00:25:41.423016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T00:25:41.42459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-30T00:25:41.425574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T00:25:41.425621Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T00:25:41.42623Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-07-30T00:26:23.99484Z","caller":"traceutil/trace.go:171","msg":"trace[1814753277] linearizableReadLoop","detail":"{readStateIndex:828; appliedIndex:827; }","duration":"433.538035ms","start":"2024-07-30T00:26:23.561269Z","end":"2024-07-30T00:26:23.994807Z","steps":["trace[1814753277] 'read index received'  (duration: 433.37097ms)","trace[1814753277] 'applied index is now lower than readState.Index'  (duration: 166.322µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-30T00:26:23.995208Z","caller":"traceutil/trace.go:171","msg":"trace[1709345789] transaction","detail":"{read_only:false; response_revision:765; number_of_response:1; }","duration":"481.884089ms","start":"2024-07-30T00:26:23.513307Z","end":"2024-07-30T00:26:23.995191Z","steps":["trace[1709345789] 'process raft request'  (duration: 481.27747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:26:23.995648Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.806745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-30T00:26:23.995733Z","caller":"traceutil/trace.go:171","msg":"trace[1972722231] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:766; }","duration":"292.962915ms","start":"2024-07-30T00:26:23.70276Z","end":"2024-07-30T00:26:23.995723Z","steps":["trace[1972722231] 'agreement among raft nodes before linearized reading'  (duration: 292.739523ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:26:23.995881Z","caller":"traceutil/trace.go:171","msg":"trace[245076853] transaction","detail":"{read_only:false; response_revision:766; number_of_response:1; }","duration":"294.860498ms","start":"2024-07-30T00:26:23.701015Z","end":"2024-07-30T00:26:23.995876Z","steps":["trace[245076853] 'process raft request'  (duration: 294.414947ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:26:23.996397Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.942569ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/\" range_end:\"/registry/services/specs/default0\" ","response":"range_response_count:2 size:1376"}
	{"level":"info","ts":"2024-07-30T00:26:23.996447Z","caller":"traceutil/trace.go:171","msg":"trace[152559508] range","detail":"{range_begin:/registry/services/specs/default/; range_end:/registry/services/specs/default0; response_count:2; response_revision:766; }","duration":"255.035228ms","start":"2024-07-30T00:26:23.741396Z","end":"2024-07-30T00:26:23.996431Z","steps":["trace[152559508] 'agreement among raft nodes before linearized reading'  (duration: 254.956392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:26:23.997906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"436.632668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-07-30T00:26:23.99793Z","caller":"traceutil/trace.go:171","msg":"trace[836191326] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:766; }","duration":"436.703031ms","start":"2024-07-30T00:26:23.561221Z","end":"2024-07-30T00:26:23.997924Z","steps":["trace[836191326] 'agreement among raft nodes before linearized reading'  (duration: 436.63165ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:26:23.997952Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T00:26:23.561208Z","time spent":"436.735324ms","remote":"127.0.0.1:47950","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-07-30T00:26:23.998772Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T00:26:23.51329Z","time spent":"481.976711ms","remote":"127.0.0.1:48054","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mj4dyvamodw5553m74jzaf622i\" mod_revision:681 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mj4dyvamodw5553m74jzaf622i\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mj4dyvamodw5553m74jzaf622i\" > >"}
	{"level":"info","ts":"2024-07-30T00:26:36.348851Z","caller":"traceutil/trace.go:171","msg":"trace[617887668] transaction","detail":"{read_only:false; response_revision:853; number_of_response:1; }","duration":"271.074591ms","start":"2024-07-30T00:26:36.077762Z","end":"2024-07-30T00:26:36.348837Z","steps":["trace[617887668] 'process raft request'  (duration: 270.985203ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:26:42.500225Z","caller":"traceutil/trace.go:171","msg":"trace[308535446] transaction","detail":"{read_only:false; response_revision:860; number_of_response:1; }","duration":"114.140021ms","start":"2024-07-30T00:26:42.386069Z","end":"2024-07-30T00:26:42.500209Z","steps":["trace[308535446] 'process raft request'  (duration: 113.991652ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:27:06.779302Z","caller":"traceutil/trace.go:171","msg":"trace[1485511801] transaction","detail":"{read_only:false; response_revision:897; number_of_response:1; }","duration":"163.157521ms","start":"2024-07-30T00:27:06.616119Z","end":"2024-07-30T00:27:06.779276Z","steps":["trace[1485511801] 'process raft request'  (duration: 163.044311ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T00:35:41.461497Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1067}
	{"level":"info","ts":"2024-07-30T00:35:41.489131Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1067,"took":"27.174243ms","hash":2562559613,"current-db-size-bytes":3956736,"current-db-size":"4.0 MB","current-db-size-in-use-bytes":1478656,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-30T00:35:41.489206Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2562559613,"revision":1067,"compact-revision":-1}
	
	
	==> etcd [fe73431c5d43e428693cb468b8365a91b45bbd5bb771184acd2c105a6dbe546f] <==
	{"level":"info","ts":"2024-07-30T00:24:56.651752Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-30T00:24:57.815019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-30T00:24:57.815085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-30T00:24:57.81513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgPreVoteResp from 79ee2fa200dbf73d at term 2"}
	{"level":"info","ts":"2024-07-30T00:24:57.815148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became candidate at term 3"}
	{"level":"info","ts":"2024-07-30T00:24:57.815154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-07-30T00:24:57.815162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d became leader at term 3"}
	{"level":"info","ts":"2024-07-30T00:24:57.815169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 3"}
	{"level":"info","ts":"2024-07-30T00:24:57.82162Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"79ee2fa200dbf73d","local-member-attributes":"{Name:functional-844183 ClientURLs:[https://192.168.39.57:2379]}","request-path":"/0/members/79ee2fa200dbf73d/attributes","cluster-id":"cdb6bc6ece496785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T00:24:57.821673Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T00:24:57.822084Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T00:24:57.823711Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.57:2379"}
	{"level":"info","ts":"2024-07-30T00:24:57.824614Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T00:24:57.824641Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T00:24:57.826214Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-30T00:25:29.710824Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-30T00:25:29.710953Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-844183","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	{"level":"warn","ts":"2024-07-30T00:25:29.711048Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T00:25:29.711152Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T00:25:29.753447Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T00:25:29.753666Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-30T00:25:29.753735Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"79ee2fa200dbf73d","current-leader-member-id":"79ee2fa200dbf73d"}
	{"level":"info","ts":"2024-07-30T00:25:29.758453Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-30T00:25:29.758735Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-07-30T00:25:29.758839Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-844183","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	
	
	==> kernel <==
	 00:36:26 up 12 min,  0 users,  load average: 0.33, 0.21, 0.15
	Linux functional-844183 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d6a77073b930419a3385e17886607a0182703db97f7277d1f3c84a8c577a27ee] <==
	I0730 00:25:42.787977       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0730 00:25:42.788948       1 shared_informer.go:320] Caches are synced for configmaps
	I0730 00:25:42.789077       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0730 00:25:42.789130       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0730 00:25:42.789232       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0730 00:25:42.793216       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0730 00:25:42.804883       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0730 00:25:43.625335       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0730 00:25:44.234205       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0730 00:25:44.249468       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0730 00:25:44.288949       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0730 00:25:44.322112       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0730 00:25:44.328181       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0730 00:25:55.343450       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0730 00:25:55.593202       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 00:26:05.666413       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.45.193"}
	I0730 00:26:09.940509       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0730 00:26:10.062424       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.35.125"}
	I0730 00:26:13.664309       1 controller.go:615] quota admission added evaluator for: namespaces
	I0730 00:26:14.327083       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.8.22"}
	I0730 00:26:14.379372       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.204.209"}
	I0730 00:26:24.886826       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.101.220"}
	I0730 00:26:25.647708       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.145.179"}
	E0730 00:26:42.996145       1 conn.go:339] Error on socket receive: read tcp 192.168.39.57:8441->192.168.39.1:56498: use of closed network connection
	E0730 00:26:51.441468       1 conn.go:339] Error on socket receive: read tcp 192.168.39.57:8441->192.168.39.1:36854: use of closed network connection
	
	
	==> kube-controller-manager [146edd33bda6fe064a8e959abd0412e4e2701475cded8efa292cf32ef0eff1fe] <==
	E0730 00:26:13.969328       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0730 00:26:13.988010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="45.800399ms"
	E0730 00:26:13.988046       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0730 00:26:14.063922       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="94.021839ms"
	I0730 00:26:14.124110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="136.029356ms"
	I0730 00:26:14.158926       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="94.952211ms"
	I0730 00:26:14.159279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="175.312µs"
	I0730 00:26:14.180084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="55.727402ms"
	I0730 00:26:14.180244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="82.189µs"
	I0730 00:26:14.262390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="17.798µs"
	I0730 00:26:15.929797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="9.54142ms"
	I0730 00:26:15.930140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="186.389µs"
	I0730 00:26:24.978119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="28.303662ms"
	I0730 00:26:25.013743       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="35.51728ms"
	I0730 00:26:25.013828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="45.398µs"
	I0730 00:26:25.068964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="20.969857ms"
	I0730 00:26:25.070672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="43.828µs"
	I0730 00:26:25.600775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="40.863735ms"
	I0730 00:26:25.617931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="16.962603ms"
	I0730 00:26:25.618110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="91.501µs"
	I0730 00:26:25.618228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="23.393µs"
	I0730 00:26:27.038000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="19.662241ms"
	I0730 00:26:27.038341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="101.077µs"
	I0730 00:26:29.044276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="10.28855ms"
	I0730 00:26:29.044359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="42.807µs"
	
	
	==> kube-controller-manager [16634231f5c6a2f09a8d96b257dfd5500cba6a9a7c916900475abb430e649f53] <==
	I0730 00:25:12.332222       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0730 00:25:12.332240       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0730 00:25:12.332246       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0730 00:25:12.334696       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0730 00:25:12.334783       1 shared_informer.go:320] Caches are synced for PV protection
	I0730 00:25:12.337215       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0730 00:25:12.337431       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0730 00:25:12.340247       1 shared_informer.go:320] Caches are synced for ephemeral
	I0730 00:25:12.342725       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0730 00:25:12.361791       1 shared_informer.go:320] Caches are synced for TTL
	I0730 00:25:12.364899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.280787ms"
	I0730 00:25:12.365474       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.243µs"
	I0730 00:25:12.371497       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0730 00:25:12.371597       1 shared_informer.go:320] Caches are synced for attach detach
	I0730 00:25:12.373455       1 shared_informer.go:320] Caches are synced for daemon sets
	I0730 00:25:12.376001       1 shared_informer.go:320] Caches are synced for crt configmap
	I0730 00:25:12.473778       1 shared_informer.go:320] Caches are synced for taint
	I0730 00:25:12.474025       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0730 00:25:12.474106       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-844183"
	I0730 00:25:12.474156       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0730 00:25:12.527872       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 00:25:12.545828       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 00:25:12.979918       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 00:25:13.022302       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 00:25:13.022364       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [51db790f977bdb0f1457be7d11ce7fdc76b3822d56e60a8413c95e2e8bf30d9e] <==
	I0730 00:25:00.518618       1 server_linux.go:69] "Using iptables proxy"
	I0730 00:25:00.529444       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0730 00:25:00.648941       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 00:25:00.648988       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 00:25:00.649005       1 server_linux.go:165] "Using iptables Proxier"
	I0730 00:25:00.653505       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 00:25:00.654094       1 server.go:872] "Version info" version="v1.30.3"
	I0730 00:25:00.654112       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:25:00.655512       1 config.go:192] "Starting service config controller"
	I0730 00:25:00.655522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 00:25:00.655598       1 config.go:101] "Starting endpoint slice config controller"
	I0730 00:25:00.655603       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 00:25:00.669556       1 config.go:319] "Starting node config controller"
	I0730 00:25:00.669683       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 00:25:00.756697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 00:25:00.756800       1 shared_informer.go:320] Caches are synced for service config
	I0730 00:25:00.773054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f3c3c9f689e1b2c6085127c3197c4bf3f57ac511e4ad9554cfa0cc844ddbe706] <==
	I0730 00:25:44.083412       1 server_linux.go:69] "Using iptables proxy"
	I0730 00:25:44.095684       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0730 00:25:44.142125       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 00:25:44.142160       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 00:25:44.142176       1 server_linux.go:165] "Using iptables Proxier"
	I0730 00:25:44.146055       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 00:25:44.146306       1 server.go:872] "Version info" version="v1.30.3"
	I0730 00:25:44.146319       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:25:44.147160       1 config.go:319] "Starting node config controller"
	I0730 00:25:44.147189       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 00:25:44.147378       1 config.go:192] "Starting service config controller"
	I0730 00:25:44.147390       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 00:25:44.147405       1 config.go:101] "Starting endpoint slice config controller"
	I0730 00:25:44.147408       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 00:25:44.247730       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 00:25:44.247791       1 shared_informer.go:320] Caches are synced for service config
	I0730 00:25:44.248038       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [075804d59c12b34352db676a36d5c975ed065ff9b0b9956870991aec805681ea] <==
	I0730 00:25:41.058831       1 serving.go:380] Generated self-signed cert in-memory
	W0730 00:25:42.648450       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0730 00:25:42.648511       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 00:25:42.648520       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0730 00:25:42.648526       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0730 00:25:42.696014       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0730 00:25:42.696051       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:25:42.705361       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0730 00:25:42.705443       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0730 00:25:42.706999       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0730 00:25:42.706981       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0730 00:25:42.808329       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e6a38d212e629552a9c1a5ea83807e2fe42fe91a8aa5bf725b572bcd4d345519] <==
	I0730 00:24:56.797883       1 serving.go:380] Generated self-signed cert in-memory
	W0730 00:24:59.085050       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0730 00:24:59.085090       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 00:24:59.085104       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0730 00:24:59.085109       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0730 00:24:59.116561       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0730 00:24:59.116589       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:24:59.126678       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0730 00:24:59.126711       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0730 00:24:59.129155       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0730 00:24:59.129263       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0730 00:24:59.228693       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0730 00:25:29.724764       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0730 00:25:29.725096       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0730 00:25:29.725274       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 30 00:31:39 functional-844183 kubelet[5044]: E0730 00:31:39.608959    5044 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:31:39 functional-844183 kubelet[5044]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:31:39 functional-844183 kubelet[5044]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:31:39 functional-844183 kubelet[5044]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:31:39 functional-844183 kubelet[5044]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:32:39 functional-844183 kubelet[5044]: E0730 00:32:39.611167    5044 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:32:39 functional-844183 kubelet[5044]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:32:39 functional-844183 kubelet[5044]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:32:39 functional-844183 kubelet[5044]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:32:39 functional-844183 kubelet[5044]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:33:39 functional-844183 kubelet[5044]: E0730 00:33:39.608899    5044 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:33:39 functional-844183 kubelet[5044]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:33:39 functional-844183 kubelet[5044]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:33:39 functional-844183 kubelet[5044]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:33:39 functional-844183 kubelet[5044]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:34:39 functional-844183 kubelet[5044]: E0730 00:34:39.609520    5044 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:34:39 functional-844183 kubelet[5044]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:34:39 functional-844183 kubelet[5044]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:34:39 functional-844183 kubelet[5044]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:34:39 functional-844183 kubelet[5044]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:35:39 functional-844183 kubelet[5044]: E0730 00:35:39.612415    5044 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:35:39 functional-844183 kubelet[5044]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:35:39 functional-844183 kubelet[5044]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:35:39 functional-844183 kubelet[5044]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:35:39 functional-844183 kubelet[5044]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> kubernetes-dashboard [374085947e8d4a0704892343919e18dada5667bdff9d8baffcaed80991076c91] <==
	2024/07/30 00:26:24 Starting overwatch
	2024/07/30 00:26:24 Using namespace: kubernetes-dashboard
	2024/07/30 00:26:24 Using in-cluster config to connect to apiserver
	2024/07/30 00:26:24 Using secret token for csrf signing
	2024/07/30 00:26:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/30 00:26:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/30 00:26:25 Successful initial request to the apiserver, version: v1.30.3
	2024/07/30 00:26:25 Generating JWE encryption key
	2024/07/30 00:26:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/30 00:26:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/30 00:26:25 Initializing JWE encryption key from synchronized object
	2024/07/30 00:26:25 Creating in-cluster Sidecar client
	2024/07/30 00:26:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/30 00:26:25 Serving insecurely on HTTP port: 9090
	2024/07/30 00:26:55 Successful request to sidecar
	
	
	==> storage-provisioner [060e1f7498707236b024ed732d80d61fc2ee6e8168ccc6035adb926d86c6f73e] <==
	I0730 00:25:43.980571       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0730 00:25:44.028136       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0730 00:25:44.028248       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0730 00:26:01.437693       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0730 00:26:01.437836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-844183_03855dd1-ecfa-4142-92e8-785867ca1d5d!
	I0730 00:26:01.438798       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5705696e-170c-4450-b295-807276a64903", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-844183_03855dd1-ecfa-4142-92e8-785867ca1d5d became leader
	I0730 00:26:01.537951       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-844183_03855dd1-ecfa-4142-92e8-785867ca1d5d!
	I0730 00:26:29.659572       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0730 00:26:29.659757       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ca2971a1-e68e-4816-8dfe-21890b22ed0f 383 0 2024-07-30 00:24:39 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-30 00:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-812afe8b-b3da-4365-b9bb-0341281f4353 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  812afe8b-b3da-4365-b9bb-0341281f4353 833 0 2024-07-30 00:26:29 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-30 00:26:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-30 00:26:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0730 00:26:29.660626       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-812afe8b-b3da-4365-b9bb-0341281f4353" provisioned
	I0730 00:26:29.660722       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0730 00:26:29.660748       1 volume_store.go:212] Trying to save persistentvolume "pvc-812afe8b-b3da-4365-b9bb-0341281f4353"
	I0730 00:26:29.662188       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"812afe8b-b3da-4365-b9bb-0341281f4353", APIVersion:"v1", ResourceVersion:"833", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0730 00:26:29.703157       1 volume_store.go:219] persistentvolume "pvc-812afe8b-b3da-4365-b9bb-0341281f4353" saved
	I0730 00:26:29.703496       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"812afe8b-b3da-4365-b9bb-0341281f4353", APIVersion:"v1", ResourceVersion:"833", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-812afe8b-b3da-4365-b9bb-0341281f4353
	
	
	==> storage-provisioner [b77bab17669d0006e6e274525de3c406a93fb2681c913ecd62d3a68cf10cea18] <==
	I0730 00:25:00.381286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0730 00:25:00.396981       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0730 00:25:00.397055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0730 00:25:17.801683       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0730 00:25:17.801835       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-844183_4e07d067-e1ef-4f4b-bf7f-f32e561a9c26!
	I0730 00:25:17.804125       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5705696e-170c-4450-b295-807276a64903", APIVersion:"v1", ResourceVersion:"523", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-844183_4e07d067-e1ef-4f4b-bf7f-f32e561a9c26 became leader
	I0730 00:25:17.902600       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-844183_4e07d067-e1ef-4f4b-bf7f-f32e561a9c26!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-844183 -n functional-844183
helpers_test.go:261: (dbg) Run:  kubectl --context functional-844183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-64454c8b5c-ckf2n
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-844183 describe pod busybox-mount mysql-64454c8b5c-ckf2n
helpers_test.go:282: (dbg) kubectl --context functional-844183 describe pod busybox-mount mysql-64454c8b5c-ckf2n:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-844183/192.168.39.57
	Start Time:       Tue, 30 Jul 2024 00:26:12 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://99610cdf6634906d1054efef9816756f6c889f4f9f14bede0372c3829a4b9d94
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 30 Jul 2024 00:26:17 +0000
	      Finished:     Tue, 30 Jul 2024 00:26:17 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cdw95 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-cdw95:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-844183
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.716s (2.716s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-64454c8b5c-ckf2n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-844183/192.168.39.57
	Start Time:       Tue, 30 Jul 2024 00:26:24 +0000
	Labels:           app=mysql
	                  pod-template-hash=64454c8b5c
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-64454c8b5c
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vss88 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vss88:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/mysql-64454c8b5c-ckf2n to functional-844183

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.70s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 node stop m02 -v=7 --alsologtostderr
E0730 00:41:20.324820  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:41:30.565553  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:41:51.046221  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:42:32.006833  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.479659523s)

                                                
                                                
-- stdout --
	* Stopping node "ha-161305-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:41:15.833046  520828 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:41:15.833155  520828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:41:15.833162  520828 out.go:304] Setting ErrFile to fd 2...
	I0730 00:41:15.833167  520828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:41:15.833386  520828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:41:15.833687  520828 mustload.go:65] Loading cluster: ha-161305
	I0730 00:41:15.834108  520828 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:41:15.834128  520828 stop.go:39] StopHost: ha-161305-m02
	I0730 00:41:15.834517  520828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:41:15.834570  520828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:41:15.850709  520828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
	I0730 00:41:15.851200  520828 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:41:15.852022  520828 main.go:141] libmachine: Using API Version  1
	I0730 00:41:15.852057  520828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:41:15.852432  520828 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:41:15.855325  520828 out.go:177] * Stopping node "ha-161305-m02"  ...
	I0730 00:41:15.856555  520828 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0730 00:41:15.856603  520828 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:41:15.856914  520828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0730 00:41:15.856951  520828 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:41:15.860027  520828 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:41:15.860477  520828 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:41:15.860507  520828 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:41:15.860655  520828 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:41:15.860838  520828 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:41:15.861035  520828 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:41:15.861184  520828 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:41:15.947847  520828 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0730 00:41:16.000917  520828 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0730 00:41:16.057186  520828 main.go:141] libmachine: Stopping "ha-161305-m02"...
	I0730 00:41:16.057222  520828 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:41:16.059013  520828 main.go:141] libmachine: (ha-161305-m02) Calling .Stop
	I0730 00:41:16.063507  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 0/120
	I0730 00:41:17.064721  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 1/120
	I0730 00:41:18.066064  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 2/120
	I0730 00:41:19.067758  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 3/120
	I0730 00:41:20.069120  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 4/120
	I0730 00:41:21.071140  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 5/120
	I0730 00:41:22.072496  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 6/120
	I0730 00:41:23.073918  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 7/120
	I0730 00:41:24.075254  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 8/120
	I0730 00:41:25.076887  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 9/120
	I0730 00:41:26.079103  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 10/120
	I0730 00:41:27.080593  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 11/120
	I0730 00:41:28.082210  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 12/120
	I0730 00:41:29.083623  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 13/120
	I0730 00:41:30.084937  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 14/120
	I0730 00:41:31.087193  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 15/120
	I0730 00:41:32.088753  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 16/120
	I0730 00:41:33.090150  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 17/120
	I0730 00:41:34.091525  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 18/120
	I0730 00:41:35.093036  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 19/120
	I0730 00:41:36.094619  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 20/120
	I0730 00:41:37.096205  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 21/120
	I0730 00:41:38.098682  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 22/120
	I0730 00:41:39.100253  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 23/120
	I0730 00:41:40.101799  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 24/120
	I0730 00:41:41.103828  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 25/120
	I0730 00:41:42.105209  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 26/120
	I0730 00:41:43.106620  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 27/120
	I0730 00:41:44.108079  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 28/120
	I0730 00:41:45.109647  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 29/120
	I0730 00:41:46.111334  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 30/120
	I0730 00:41:47.112667  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 31/120
	I0730 00:41:48.114014  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 32/120
	I0730 00:41:49.115447  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 33/120
	I0730 00:41:50.116844  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 34/120
	I0730 00:41:51.118531  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 35/120
	I0730 00:41:52.120231  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 36/120
	I0730 00:41:53.121757  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 37/120
	I0730 00:41:54.123091  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 38/120
	I0730 00:41:55.125169  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 39/120
	I0730 00:41:56.126568  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 40/120
	I0730 00:41:57.127812  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 41/120
	I0730 00:41:58.129243  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 42/120
	I0730 00:41:59.131676  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 43/120
	I0730 00:42:00.134214  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 44/120
	I0730 00:42:01.136043  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 45/120
	I0730 00:42:02.137622  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 46/120
	I0730 00:42:03.139411  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 47/120
	I0730 00:42:04.140764  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 48/120
	I0730 00:42:05.142459  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 49/120
	I0730 00:42:06.143825  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 50/120
	I0730 00:42:07.145540  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 51/120
	I0730 00:42:08.147185  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 52/120
	I0730 00:42:09.148602  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 53/120
	I0730 00:42:10.150167  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 54/120
	I0730 00:42:11.152077  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 55/120
	I0730 00:42:12.153748  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 56/120
	I0730 00:42:13.155220  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 57/120
	I0730 00:42:14.157851  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 58/120
	I0730 00:42:15.159270  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 59/120
	I0730 00:42:16.161190  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 60/120
	I0730 00:42:17.162757  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 61/120
	I0730 00:42:18.164310  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 62/120
	I0730 00:42:19.165830  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 63/120
	I0730 00:42:20.167756  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 64/120
	I0730 00:42:21.169629  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 65/120
	I0730 00:42:22.170941  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 66/120
	I0730 00:42:23.172406  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 67/120
	I0730 00:42:24.173707  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 68/120
	I0730 00:42:25.175575  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 69/120
	I0730 00:42:26.177010  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 70/120
	I0730 00:42:27.179139  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 71/120
	I0730 00:42:28.181662  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 72/120
	I0730 00:42:29.183468  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 73/120
	I0730 00:42:30.185492  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 74/120
	I0730 00:42:31.187629  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 75/120
	I0730 00:42:32.189465  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 76/120
	I0730 00:42:33.191886  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 77/120
	I0730 00:42:34.193881  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 78/120
	I0730 00:42:35.195350  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 79/120
	I0730 00:42:36.197418  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 80/120
	I0730 00:42:37.198800  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 81/120
	I0730 00:42:38.200255  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 82/120
	I0730 00:42:39.202630  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 83/120
	I0730 00:42:40.204690  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 84/120
	I0730 00:42:41.206362  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 85/120
	I0730 00:42:42.208075  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 86/120
	I0730 00:42:43.209608  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 87/120
	I0730 00:42:44.211021  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 88/120
	I0730 00:42:45.212422  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 89/120
	I0730 00:42:46.214549  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 90/120
	I0730 00:42:47.216129  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 91/120
	I0730 00:42:48.217821  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 92/120
	I0730 00:42:49.219317  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 93/120
	I0730 00:42:50.220868  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 94/120
	I0730 00:42:51.223106  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 95/120
	I0730 00:42:52.224962  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 96/120
	I0730 00:42:53.226300  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 97/120
	I0730 00:42:54.228585  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 98/120
	I0730 00:42:55.230185  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 99/120
	I0730 00:42:56.232433  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 100/120
	I0730 00:42:57.234874  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 101/120
	I0730 00:42:58.236520  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 102/120
	I0730 00:42:59.238036  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 103/120
	I0730 00:43:00.239489  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 104/120
	I0730 00:43:01.241589  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 105/120
	I0730 00:43:02.243777  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 106/120
	I0730 00:43:03.246133  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 107/120
	I0730 00:43:04.247849  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 108/120
	I0730 00:43:05.249488  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 109/120
	I0730 00:43:06.251305  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 110/120
	I0730 00:43:07.252816  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 111/120
	I0730 00:43:08.254465  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 112/120
	I0730 00:43:09.255796  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 113/120
	I0730 00:43:10.257322  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 114/120
	I0730 00:43:11.259525  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 115/120
	I0730 00:43:12.260895  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 116/120
	I0730 00:43:13.263395  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 117/120
	I0730 00:43:14.264651  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 118/120
	I0730 00:43:15.266948  520828 main.go:141] libmachine: (ha-161305-m02) Waiting for machine to stop 119/120
	I0730 00:43:16.267955  520828 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0730 00:43:16.268136  520828 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-161305 node stop m02 -v=7 --alsologtostderr": exit status 30
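
The stderr log above shows the shape of the failure: `node stop m02` asks the kvm2 driver to stop the VM, then polls its state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and gives up while the domain still reports "Running", which is what produces exit status 30. Below is a minimal Go sketch of that stop-and-poll pattern for reference; the vmStop/vmState callbacks are hypothetical placeholders used only for illustration, not minikube's or libmachine's real API.

	// Minimal sketch of the stop-and-poll pattern visible in the log above:
	// request a stop, then check the VM state once per second for up to 120
	// attempts before giving up. vmStop and vmState are hypothetical
	// callbacks, not minikube's actual driver interface.
	package stopwait

	import (
		"errors"
		"fmt"
		"time"
	)

	func stopWithTimeout(name string, vmStop func() error, vmState func() (string, error)) error {
		if err := vmStop(); err != nil {
			return fmt.Errorf("requesting stop of %q: %w", name, err)
		}
		const attempts = 120
		for i := 0; i < attempts; i++ {
			state, err := vmState()
			if err != nil {
				return fmt.Errorf("querying state of %q: %w", name, err)
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}
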
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 3 (19.085297668s)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-161305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:43:16.317467  521255 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:43:16.317731  521255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:16.317740  521255 out.go:304] Setting ErrFile to fd 2...
	I0730 00:43:16.317745  521255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:16.317939  521255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:43:16.318185  521255 out.go:298] Setting JSON to false
	I0730 00:43:16.318214  521255 mustload.go:65] Loading cluster: ha-161305
	I0730 00:43:16.318360  521255 notify.go:220] Checking for updates...
	I0730 00:43:16.318582  521255 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:43:16.318599  521255 status.go:255] checking status of ha-161305 ...
	I0730 00:43:16.318979  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:16.319042  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:16.342275  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0730 00:43:16.342831  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:16.343595  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:16.343624  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:16.344066  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:16.344281  521255 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:43:16.346184  521255 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:43:16.346201  521255 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:16.346578  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:16.346637  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:16.362210  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33823
	I0730 00:43:16.362652  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:16.363163  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:16.363189  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:16.363527  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:16.363709  521255 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:43:16.366593  521255 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:16.367011  521255 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:16.367045  521255 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:16.367166  521255 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:16.367463  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:16.367518  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:16.382465  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I0730 00:43:16.382880  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:16.383403  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:16.383423  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:16.383783  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:16.383956  521255 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:43:16.384183  521255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:16.384208  521255 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:43:16.387451  521255 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:16.387938  521255 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:16.387970  521255 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:16.388142  521255 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:43:16.388316  521255 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:43:16.388522  521255 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:43:16.388670  521255 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:43:16.473867  521255 ssh_runner.go:195] Run: systemctl --version
	I0730 00:43:16.480448  521255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:16.496953  521255 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:43:16.497010  521255 api_server.go:166] Checking apiserver status ...
	I0730 00:43:16.497047  521255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:43:16.518552  521255 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0730 00:43:16.528046  521255 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:43:16.528114  521255 ssh_runner.go:195] Run: ls
	I0730 00:43:16.532299  521255 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:43:16.537929  521255 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:43:16.537956  521255 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:43:16.537967  521255 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:43:16.537985  521255 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:43:16.538277  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:16.538313  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:16.555439  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I0730 00:43:16.555921  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:16.556465  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:16.556492  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:16.556861  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:16.557039  521255 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:43:16.558607  521255 status.go:330] ha-161305-m02 host status = "Running" (err=<nil>)
	I0730 00:43:16.558621  521255 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:16.558937  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:16.558974  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:16.576376  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I0730 00:43:16.576867  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:16.577465  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:16.577491  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:16.577888  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:16.578191  521255 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:43:16.580680  521255 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:16.581134  521255 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:16.581176  521255 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:16.581307  521255 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:16.581612  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:16.581650  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:16.597316  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
	I0730 00:43:16.597699  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:16.598203  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:16.598231  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:16.598614  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:16.598835  521255 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:43:16.599059  521255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:16.599081  521255 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:43:16.601722  521255 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:16.602122  521255 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:16.602148  521255 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:16.602300  521255 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:43:16.602474  521255 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:43:16.602632  521255 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:43:16.602756  521255 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	W0730 00:43:34.993011  521255 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:43:34.993148  521255 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	E0730 00:43:34.993177  521255 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:34.993190  521255 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0730 00:43:34.993221  521255 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:34.993232  521255 status.go:255] checking status of ha-161305-m03 ...
	I0730 00:43:34.993589  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:34.993659  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:35.010173  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I0730 00:43:35.010745  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:35.011339  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:35.011370  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:35.011705  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:35.011898  521255 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:43:35.013470  521255 status.go:330] ha-161305-m03 host status = "Running" (err=<nil>)
	I0730 00:43:35.013491  521255 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:43:35.013869  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:35.013914  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:35.031645  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I0730 00:43:35.032133  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:35.032693  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:35.032729  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:35.033059  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:35.033283  521255 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:43:35.036172  521255 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:35.036603  521255 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:43:35.036636  521255 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:35.036798  521255 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:43:35.037142  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:35.037186  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:35.054072  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0730 00:43:35.054506  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:35.054944  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:35.054963  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:35.055308  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:35.055496  521255 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:43:35.055666  521255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:35.055692  521255 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:43:35.058539  521255 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:35.059001  521255 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:43:35.059042  521255 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:35.059207  521255 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:43:35.059363  521255 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:43:35.059521  521255 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:43:35.059632  521255 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:43:35.141041  521255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:35.156883  521255 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:43:35.156921  521255 api_server.go:166] Checking apiserver status ...
	I0730 00:43:35.156965  521255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:43:35.172209  521255 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup
	W0730 00:43:35.182745  521255 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:43:35.182803  521255 ssh_runner.go:195] Run: ls
	I0730 00:43:35.186729  521255 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:43:35.190933  521255 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:43:35.190960  521255 status.go:422] ha-161305-m03 apiserver status = Running (err=<nil>)
	I0730 00:43:35.190970  521255 status.go:257] ha-161305-m03 status: &{Name:ha-161305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:43:35.190988  521255 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:43:35.191346  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:35.191386  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:35.207196  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0730 00:43:35.207658  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:35.208142  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:35.208162  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:35.208523  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:35.208749  521255 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:43:35.210324  521255 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:43:35.210340  521255 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:43:35.210607  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:35.210639  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:35.225889  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35075
	I0730 00:43:35.226429  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:35.226961  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:35.226982  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:35.227335  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:35.227537  521255 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:43:35.230343  521255 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:35.230967  521255 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:43:35.230969  521255 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:43:35.231082  521255 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:35.231364  521255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:35.231411  521255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:35.247458  521255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0730 00:43:35.247865  521255 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:35.248333  521255 main.go:141] libmachine: Using API Version  1
	I0730 00:43:35.248353  521255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:35.248692  521255 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:35.248908  521255 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:43:35.249080  521255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:35.249105  521255 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:43:35.252029  521255 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:35.252471  521255 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:43:35.252497  521255 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:35.252659  521255 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:43:35.252846  521255 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:43:35.252998  521255 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:43:35.253150  521255 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:43:35.337233  521255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:35.352713  521255 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr" : exit status 3
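
For the follow-up `status` command, the stderr log above shows the root cause of the `host: Error` line for ha-161305-m02: the SSH dial to 192.168.39.126:22 fails with "no route to host" after roughly 18 seconds, so kubelet and apiserver are reported as Nonexistent and the command exits with status 3. A stand-alone TCP probe of that port reproduces the same symptom in isolation; the snippet below is an illustrative check only, not part of minikube or its test suite, and the address is the m02 IP taken from the log.

	// Stand-alone TCP probe of the SSH port that the status check could not
	// reach. Illustrative only; the IP is the ha-161305-m02 address from the
	// log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func sshReachable(ip string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), timeout)
		if err != nil {
			return fmt.Errorf("ssh port on %s not reachable: %w", ip, err)
		}
		return conn.Close()
	}

	func main() {
		if err := sshReachable("192.168.39.126", 5*time.Second); err != nil {
			// This is the condition that makes `minikube status` report
			// host: Error / kubelet: Nonexistent for the node.
			fmt.Println(err)
			return
		}
		fmt.Println("ssh port reachable")
	}
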
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-161305 -n ha-161305
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-161305 logs -n 25: (1.471770731s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305:/home/docker/cp-test_ha-161305-m03_ha-161305.txt                       |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305 sudo cat                                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305.txt                                 |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m02:/home/docker/cp-test_ha-161305-m03_ha-161305-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m02 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m04 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp testdata/cp-test.txt                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305:/home/docker/cp-test_ha-161305-m04_ha-161305.txt                       |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305 sudo cat                                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305.txt                                 |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m02:/home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m02 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03:/home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m03 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-161305 node stop m02 -v=7                                                     | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:36:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:36:28.665664  516753 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:36:28.665890  516753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:36:28.665903  516753 out.go:304] Setting ErrFile to fd 2...
	I0730 00:36:28.665916  516753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:36:28.666443  516753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:36:28.667059  516753 out.go:298] Setting JSON to false
	I0730 00:36:28.668005  516753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8331,"bootTime":1722291458,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:36:28.668072  516753 start.go:139] virtualization: kvm guest
	I0730 00:36:28.670170  516753 out.go:177] * [ha-161305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:36:28.671509  516753 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:36:28.671514  516753 notify.go:220] Checking for updates...
	I0730 00:36:28.674276  516753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:36:28.675589  516753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:36:28.676888  516753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:36:28.678247  516753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:36:28.679713  516753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:36:28.681221  516753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:36:28.717149  516753 out.go:177] * Using the kvm2 driver based on user configuration
	I0730 00:36:28.718317  516753 start.go:297] selected driver: kvm2
	I0730 00:36:28.718336  516753 start.go:901] validating driver "kvm2" against <nil>
	I0730 00:36:28.718354  516753 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:36:28.719473  516753 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:36:28.719565  516753 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:36:28.735693  516753 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:36:28.735761  516753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 00:36:28.736094  516753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:36:28.736182  516753 cni.go:84] Creating CNI manager for ""
	I0730 00:36:28.736199  516753 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0730 00:36:28.736211  516753 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0730 00:36:28.736292  516753 start.go:340] cluster config:
	{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:36:28.736440  516753 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:36:28.738404  516753 out.go:177] * Starting "ha-161305" primary control-plane node in "ha-161305" cluster
	I0730 00:36:28.739904  516753 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:36:28.739969  516753 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:36:28.739984  516753 cache.go:56] Caching tarball of preloaded images
	I0730 00:36:28.740079  516753 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:36:28.740094  516753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:36:28.741152  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:36:28.741200  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json: {Name:mk0edeef8de82386ac1fad0fbd86252925ee5418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:28.741403  516753 start.go:360] acquireMachinesLock for ha-161305: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:36:28.741439  516753 start.go:364] duration metric: took 20.343µs to acquireMachinesLock for "ha-161305"
	I0730 00:36:28.741459  516753 start.go:93] Provisioning new machine with config: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:36:28.741617  516753 start.go:125] createHost starting for "" (driver="kvm2")
	I0730 00:36:28.743370  516753 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0730 00:36:28.743572  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:36:28.743621  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:36:28.759060  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I0730 00:36:28.759468  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:36:28.760031  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:36:28.760059  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:36:28.760391  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:36:28.760580  516753 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:36:28.760744  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:28.760894  516753 start.go:159] libmachine.API.Create for "ha-161305" (driver="kvm2")
	I0730 00:36:28.760920  516753 client.go:168] LocalClient.Create starting
	I0730 00:36:28.760974  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 00:36:28.761013  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:36:28.761032  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:36:28.761092  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 00:36:28.761119  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:36:28.761135  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:36:28.761265  516753 main.go:141] libmachine: Running pre-create checks...
	I0730 00:36:28.761292  516753 main.go:141] libmachine: (ha-161305) Calling .PreCreateCheck
	I0730 00:36:28.761634  516753 main.go:141] libmachine: (ha-161305) Calling .GetConfigRaw
	I0730 00:36:28.762027  516753 main.go:141] libmachine: Creating machine...
	I0730 00:36:28.762042  516753 main.go:141] libmachine: (ha-161305) Calling .Create
	I0730 00:36:28.762152  516753 main.go:141] libmachine: (ha-161305) Creating KVM machine...
	I0730 00:36:28.763494  516753 main.go:141] libmachine: (ha-161305) DBG | found existing default KVM network
	I0730 00:36:28.764231  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:28.764081  516776 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014870}
	I0730 00:36:28.764257  516753 main.go:141] libmachine: (ha-161305) DBG | created network xml: 
	I0730 00:36:28.764276  516753 main.go:141] libmachine: (ha-161305) DBG | <network>
	I0730 00:36:28.764288  516753 main.go:141] libmachine: (ha-161305) DBG |   <name>mk-ha-161305</name>
	I0730 00:36:28.764301  516753 main.go:141] libmachine: (ha-161305) DBG |   <dns enable='no'/>
	I0730 00:36:28.764312  516753 main.go:141] libmachine: (ha-161305) DBG |   
	I0730 00:36:28.764324  516753 main.go:141] libmachine: (ha-161305) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0730 00:36:28.764333  516753 main.go:141] libmachine: (ha-161305) DBG |     <dhcp>
	I0730 00:36:28.764340  516753 main.go:141] libmachine: (ha-161305) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0730 00:36:28.764347  516753 main.go:141] libmachine: (ha-161305) DBG |     </dhcp>
	I0730 00:36:28.764353  516753 main.go:141] libmachine: (ha-161305) DBG |   </ip>
	I0730 00:36:28.764359  516753 main.go:141] libmachine: (ha-161305) DBG |   
	I0730 00:36:28.764366  516753 main.go:141] libmachine: (ha-161305) DBG | </network>
	I0730 00:36:28.764373  516753 main.go:141] libmachine: (ha-161305) DBG | 
	I0730 00:36:28.769353  516753 main.go:141] libmachine: (ha-161305) DBG | trying to create private KVM network mk-ha-161305 192.168.39.0/24...
	I0730 00:36:28.840386  516753 main.go:141] libmachine: (ha-161305) DBG | private KVM network mk-ha-161305 192.168.39.0/24 created
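	For anyone reproducing this step by hand: the <network> XML logged above could be loaded with virsh. This is a hedged sketch, not what minikube executes (it drives libvirt through its Go bindings), and the file name is hypothetical:
	  # save the <network> XML shown above to mk-ha-161305.xml, then:
	  virsh net-define mk-ha-161305.xml    # register the network definition with libvirt
	  virsh net-start mk-ha-161305         # bring up the bridge + dnsmasq for 192.168.39.0/24
	  virsh net-list --all                 # verify mk-ha-161305 shows as active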
	I0730 00:36:28.840425  516753 main.go:141] libmachine: (ha-161305) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305 ...
	I0730 00:36:28.840444  516753 main.go:141] libmachine: (ha-161305) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 00:36:28.840464  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:28.840417  516776 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:36:28.840589  516753 main.go:141] libmachine: (ha-161305) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 00:36:29.119872  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:29.119739  516776 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa...
	I0730 00:36:29.284121  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:29.283967  516776 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/ha-161305.rawdisk...
	I0730 00:36:29.284155  516753 main.go:141] libmachine: (ha-161305) DBG | Writing magic tar header
	I0730 00:36:29.284166  516753 main.go:141] libmachine: (ha-161305) DBG | Writing SSH key tar header
	I0730 00:36:29.284173  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:29.284111  516776 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305 ...
	I0730 00:36:29.284307  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305
	I0730 00:36:29.284350  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305 (perms=drwx------)
	I0730 00:36:29.284365  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 00:36:29.284379  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:36:29.284390  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 00:36:29.284399  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 00:36:29.284409  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins
	I0730 00:36:29.284424  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home
	I0730 00:36:29.284442  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 00:36:29.284453  516753 main.go:141] libmachine: (ha-161305) DBG | Skipping /home - not owner
	I0730 00:36:29.284470  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 00:36:29.284483  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 00:36:29.284497  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 00:36:29.284508  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 00:36:29.284520  516753 main.go:141] libmachine: (ha-161305) Creating domain...
	I0730 00:36:29.285579  516753 main.go:141] libmachine: (ha-161305) define libvirt domain using xml: 
	I0730 00:36:29.285610  516753 main.go:141] libmachine: (ha-161305) <domain type='kvm'>
	I0730 00:36:29.285621  516753 main.go:141] libmachine: (ha-161305)   <name>ha-161305</name>
	I0730 00:36:29.285629  516753 main.go:141] libmachine: (ha-161305)   <memory unit='MiB'>2200</memory>
	I0730 00:36:29.285641  516753 main.go:141] libmachine: (ha-161305)   <vcpu>2</vcpu>
	I0730 00:36:29.285652  516753 main.go:141] libmachine: (ha-161305)   <features>
	I0730 00:36:29.285663  516753 main.go:141] libmachine: (ha-161305)     <acpi/>
	I0730 00:36:29.285672  516753 main.go:141] libmachine: (ha-161305)     <apic/>
	I0730 00:36:29.285681  516753 main.go:141] libmachine: (ha-161305)     <pae/>
	I0730 00:36:29.285693  516753 main.go:141] libmachine: (ha-161305)     
	I0730 00:36:29.285701  516753 main.go:141] libmachine: (ha-161305)   </features>
	I0730 00:36:29.285710  516753 main.go:141] libmachine: (ha-161305)   <cpu mode='host-passthrough'>
	I0730 00:36:29.285720  516753 main.go:141] libmachine: (ha-161305)   
	I0730 00:36:29.285727  516753 main.go:141] libmachine: (ha-161305)   </cpu>
	I0730 00:36:29.285735  516753 main.go:141] libmachine: (ha-161305)   <os>
	I0730 00:36:29.285743  516753 main.go:141] libmachine: (ha-161305)     <type>hvm</type>
	I0730 00:36:29.285753  516753 main.go:141] libmachine: (ha-161305)     <boot dev='cdrom'/>
	I0730 00:36:29.285767  516753 main.go:141] libmachine: (ha-161305)     <boot dev='hd'/>
	I0730 00:36:29.285779  516753 main.go:141] libmachine: (ha-161305)     <bootmenu enable='no'/>
	I0730 00:36:29.285786  516753 main.go:141] libmachine: (ha-161305)   </os>
	I0730 00:36:29.285795  516753 main.go:141] libmachine: (ha-161305)   <devices>
	I0730 00:36:29.285804  516753 main.go:141] libmachine: (ha-161305)     <disk type='file' device='cdrom'>
	I0730 00:36:29.285817  516753 main.go:141] libmachine: (ha-161305)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/boot2docker.iso'/>
	I0730 00:36:29.285829  516753 main.go:141] libmachine: (ha-161305)       <target dev='hdc' bus='scsi'/>
	I0730 00:36:29.285851  516753 main.go:141] libmachine: (ha-161305)       <readonly/>
	I0730 00:36:29.285871  516753 main.go:141] libmachine: (ha-161305)     </disk>
	I0730 00:36:29.285899  516753 main.go:141] libmachine: (ha-161305)     <disk type='file' device='disk'>
	I0730 00:36:29.285920  516753 main.go:141] libmachine: (ha-161305)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 00:36:29.285934  516753 main.go:141] libmachine: (ha-161305)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/ha-161305.rawdisk'/>
	I0730 00:36:29.285939  516753 main.go:141] libmachine: (ha-161305)       <target dev='hda' bus='virtio'/>
	I0730 00:36:29.285945  516753 main.go:141] libmachine: (ha-161305)     </disk>
	I0730 00:36:29.285952  516753 main.go:141] libmachine: (ha-161305)     <interface type='network'>
	I0730 00:36:29.285958  516753 main.go:141] libmachine: (ha-161305)       <source network='mk-ha-161305'/>
	I0730 00:36:29.285962  516753 main.go:141] libmachine: (ha-161305)       <model type='virtio'/>
	I0730 00:36:29.285967  516753 main.go:141] libmachine: (ha-161305)     </interface>
	I0730 00:36:29.285971  516753 main.go:141] libmachine: (ha-161305)     <interface type='network'>
	I0730 00:36:29.285977  516753 main.go:141] libmachine: (ha-161305)       <source network='default'/>
	I0730 00:36:29.285981  516753 main.go:141] libmachine: (ha-161305)       <model type='virtio'/>
	I0730 00:36:29.285986  516753 main.go:141] libmachine: (ha-161305)     </interface>
	I0730 00:36:29.285990  516753 main.go:141] libmachine: (ha-161305)     <serial type='pty'>
	I0730 00:36:29.285995  516753 main.go:141] libmachine: (ha-161305)       <target port='0'/>
	I0730 00:36:29.286003  516753 main.go:141] libmachine: (ha-161305)     </serial>
	I0730 00:36:29.286008  516753 main.go:141] libmachine: (ha-161305)     <console type='pty'>
	I0730 00:36:29.286012  516753 main.go:141] libmachine: (ha-161305)       <target type='serial' port='0'/>
	I0730 00:36:29.286025  516753 main.go:141] libmachine: (ha-161305)     </console>
	I0730 00:36:29.286034  516753 main.go:141] libmachine: (ha-161305)     <rng model='virtio'>
	I0730 00:36:29.286040  516753 main.go:141] libmachine: (ha-161305)       <backend model='random'>/dev/random</backend>
	I0730 00:36:29.286049  516753 main.go:141] libmachine: (ha-161305)     </rng>
	I0730 00:36:29.286054  516753 main.go:141] libmachine: (ha-161305)     
	I0730 00:36:29.286060  516753 main.go:141] libmachine: (ha-161305)     
	I0730 00:36:29.286094  516753 main.go:141] libmachine: (ha-161305)   </devices>
	I0730 00:36:29.286116  516753 main.go:141] libmachine: (ha-161305) </domain>
	I0730 00:36:29.286131  516753 main.go:141] libmachine: (ha-161305) 
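	The equivalent manual flow for the <domain> XML above would be virsh define/start; again only a sketch, assuming the XML is saved to a local file (minikube defines the domain through the libvirt API instead):
	  virsh define ha-161305.xml    # hypothetical file holding the <domain> XML above
	  virsh start ha-161305         # boots from the boot2docker ISO, then the raw disk
	  virsh dominfo ha-161305       # confirm State is "running"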
	I0730 00:36:29.290560  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:48:5f:80 in network default
	I0730 00:36:29.291121  516753 main.go:141] libmachine: (ha-161305) Ensuring networks are active...
	I0730 00:36:29.291136  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:29.291772  516753 main.go:141] libmachine: (ha-161305) Ensuring network default is active
	I0730 00:36:29.292087  516753 main.go:141] libmachine: (ha-161305) Ensuring network mk-ha-161305 is active
	I0730 00:36:29.292564  516753 main.go:141] libmachine: (ha-161305) Getting domain xml...
	I0730 00:36:29.293265  516753 main.go:141] libmachine: (ha-161305) Creating domain...
	I0730 00:36:30.485952  516753 main.go:141] libmachine: (ha-161305) Waiting to get IP...
	I0730 00:36:30.486728  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:30.487172  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:30.487213  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:30.487165  516776 retry.go:31] will retry after 239.783115ms: waiting for machine to come up
	I0730 00:36:30.728669  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:30.729085  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:30.729112  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:30.729046  516776 retry.go:31] will retry after 334.71581ms: waiting for machine to come up
	I0730 00:36:31.065673  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:31.066051  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:31.066088  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:31.066028  516776 retry.go:31] will retry after 442.95444ms: waiting for machine to come up
	I0730 00:36:31.510831  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:31.511275  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:31.511298  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:31.511253  516776 retry.go:31] will retry after 609.120594ms: waiting for machine to come up
	I0730 00:36:32.121947  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:32.122399  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:32.122429  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:32.122317  516776 retry.go:31] will retry after 627.70006ms: waiting for machine to come up
	I0730 00:36:32.751197  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:32.751641  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:32.751693  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:32.751622  516776 retry.go:31] will retry after 574.420516ms: waiting for machine to come up
	I0730 00:36:33.327441  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:33.327861  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:33.327901  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:33.327809  516776 retry.go:31] will retry after 830.453811ms: waiting for machine to come up
	I0730 00:36:34.159438  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:34.159812  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:34.159836  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:34.159774  516776 retry.go:31] will retry after 954.381064ms: waiting for machine to come up
	I0730 00:36:35.116062  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:35.116448  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:35.116478  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:35.116404  516776 retry.go:31] will retry after 1.732818187s: waiting for machine to come up
	I0730 00:36:36.851343  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:36.851780  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:36.851811  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:36.851730  516776 retry.go:31] will retry after 1.834904059s: waiting for machine to come up
	I0730 00:36:38.688038  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:38.688585  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:38.688618  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:38.688530  516776 retry.go:31] will retry after 2.495048845s: waiting for machine to come up
	I0730 00:36:41.184694  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:41.185264  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:41.185289  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:41.185205  516776 retry.go:31] will retry after 2.40860982s: waiting for machine to come up
	I0730 00:36:43.596830  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:43.597316  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:43.597343  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:43.597271  516776 retry.go:31] will retry after 3.976089322s: waiting for machine to come up
	I0730 00:36:47.577942  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:47.578387  516753 main.go:141] libmachine: (ha-161305) Found IP for machine: 192.168.39.80
	I0730 00:36:47.578413  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has current primary IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
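	The retry loop above is polling libvirt's DHCP leases for the domain's MAC address; a manual equivalent (using the MAC from the log) would be roughly:
	  virsh net-dhcp-leases mk-ha-161305 | grep 52:54:00:11:58:6f
	  # prints the lease with 192.168.39.80/24 once the guest's DHCP client has run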
	I0730 00:36:47.578426  516753 main.go:141] libmachine: (ha-161305) Reserving static IP address...
	I0730 00:36:47.578729  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find host DHCP lease matching {name: "ha-161305", mac: "52:54:00:11:58:6f", ip: "192.168.39.80"} in network mk-ha-161305
	I0730 00:36:47.651095  516753 main.go:141] libmachine: (ha-161305) DBG | Getting to WaitForSSH function...
	I0730 00:36:47.651129  516753 main.go:141] libmachine: (ha-161305) Reserved static IP address: 192.168.39.80
	I0730 00:36:47.651141  516753 main.go:141] libmachine: (ha-161305) Waiting for SSH to be available...
	I0730 00:36:47.653320  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:47.653624  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305
	I0730 00:36:47.653658  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find defined IP address of network mk-ha-161305 interface with MAC address 52:54:00:11:58:6f
	I0730 00:36:47.653813  516753 main.go:141] libmachine: (ha-161305) DBG | Using SSH client type: external
	I0730 00:36:47.653857  516753 main.go:141] libmachine: (ha-161305) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa (-rw-------)
	I0730 00:36:47.653897  516753 main.go:141] libmachine: (ha-161305) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:36:47.653916  516753 main.go:141] libmachine: (ha-161305) DBG | About to run SSH command:
	I0730 00:36:47.653932  516753 main.go:141] libmachine: (ha-161305) DBG | exit 0
	I0730 00:36:47.657769  516753 main.go:141] libmachine: (ha-161305) DBG | SSH cmd err, output: exit status 255: 
	I0730 00:36:47.657788  516753 main.go:141] libmachine: (ha-161305) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0730 00:36:47.657795  516753 main.go:141] libmachine: (ha-161305) DBG | command : exit 0
	I0730 00:36:47.657800  516753 main.go:141] libmachine: (ha-161305) DBG | err     : exit status 255
	I0730 00:36:47.657806  516753 main.go:141] libmachine: (ha-161305) DBG | output  : 
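	The `exit 0` probe above is the SSH readiness check: OpenSSH itself exits 255 when it cannot reach sshd, and 0 once the trivial remote command runs, so the 255 here just means the guest is not listening yet. A standalone version of the same probe (key path and address taken from the log) might look like:
	  ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa \
	      docker@192.168.39.80 exit 0 \
	    && echo "sshd reachable" || echo "not ready yet (ssh reports 255 on connection failure)"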
	I0730 00:36:50.658959  516753 main.go:141] libmachine: (ha-161305) DBG | Getting to WaitForSSH function...
	I0730 00:36:50.661233  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.661552  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:50.661578  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.661661  516753 main.go:141] libmachine: (ha-161305) DBG | Using SSH client type: external
	I0730 00:36:50.661684  516753 main.go:141] libmachine: (ha-161305) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa (-rw-------)
	I0730 00:36:50.661704  516753 main.go:141] libmachine: (ha-161305) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:36:50.661717  516753 main.go:141] libmachine: (ha-161305) DBG | About to run SSH command:
	I0730 00:36:50.661726  516753 main.go:141] libmachine: (ha-161305) DBG | exit 0
	I0730 00:36:50.788532  516753 main.go:141] libmachine: (ha-161305) DBG | SSH cmd err, output: <nil>: 
	I0730 00:36:50.788857  516753 main.go:141] libmachine: (ha-161305) KVM machine creation complete!
	I0730 00:36:50.789193  516753 main.go:141] libmachine: (ha-161305) Calling .GetConfigRaw
	I0730 00:36:50.789777  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:50.789988  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:50.790144  516753 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 00:36:50.790161  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:36:50.791244  516753 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 00:36:50.791258  516753 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 00:36:50.791263  516753 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 00:36:50.791268  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:50.793663  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.794007  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:50.794027  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.794165  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:50.794342  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:50.794507  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:50.794664  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:50.794836  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:50.795128  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:50.795144  516753 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 00:36:50.904092  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:36:50.904121  516753 main.go:141] libmachine: Detecting the provisioner...
	I0730 00:36:50.904134  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:50.906802  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.907194  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:50.907225  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.907374  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:50.907633  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:50.907794  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:50.907942  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:50.908184  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:50.908436  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:50.908450  516753 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 00:36:51.021254  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 00:36:51.021349  516753 main.go:141] libmachine: found compatible host: buildroot
	I0730 00:36:51.021364  516753 main.go:141] libmachine: Provisioning with buildroot...
	I0730 00:36:51.021380  516753 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:36:51.021661  516753 buildroot.go:166] provisioning hostname "ha-161305"
	I0730 00:36:51.021694  516753 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:36:51.021868  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.024286  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.024603  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.024629  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.024726  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.024898  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.025041  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.025219  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.025381  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:51.025570  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:51.025585  516753 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305 && echo "ha-161305" | sudo tee /etc/hostname
	I0730 00:36:51.149628  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:36:51.149675  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.152336  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.152651  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.152673  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.152955  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.153209  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.153388  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.153535  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.153679  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:51.153894  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:51.153918  516753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:36:51.272933  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
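	After the hostname script above succeeds, the guest should resolve its own name locally; a quick, hedged verification on the guest:
	  hostname                  # expected: ha-161305
	  getent hosts ha-161305    # expected to map to 127.0.1.1 via /etc/hosts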
	I0730 00:36:51.272971  516753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:36:51.273004  516753 buildroot.go:174] setting up certificates
	I0730 00:36:51.273041  516753 provision.go:84] configureAuth start
	I0730 00:36:51.273063  516753 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:36:51.273376  516753 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:36:51.276188  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.276543  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.276572  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.276731  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.278888  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.279207  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.279234  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.279332  516753 provision.go:143] copyHostCerts
	I0730 00:36:51.279368  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:36:51.279420  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:36:51.279439  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:36:51.279508  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:36:51.279633  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:36:51.279656  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:36:51.279664  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:36:51.279692  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:36:51.279737  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:36:51.279753  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:36:51.279759  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:36:51.279780  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:36:51.279828  516753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305 san=[127.0.0.1 192.168.39.80 ha-161305 localhost minikube]
	I0730 00:36:51.487281  516753 provision.go:177] copyRemoteCerts
	I0730 00:36:51.487351  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:36:51.487378  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.490053  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.490403  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.490433  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.490564  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.490767  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.490939  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.491079  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:36:51.574497  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:36:51.574583  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0730 00:36:51.596184  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:36:51.596261  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:36:51.617691  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:36:51.617771  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:36:51.638785  516753 provision.go:87] duration metric: took 365.724901ms to configureAuth
	I0730 00:36:51.638814  516753 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:36:51.638988  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:36:51.639061  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.641680  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.641975  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.641998  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.642137  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.642374  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.642561  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.642745  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.642912  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:51.643137  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:51.643156  516753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:36:51.909063  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
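	The step above writes /etc/sysconfig/crio.minikube and restarts CRI-O; a sketch of how one could confirm it took effect on the guest (expected output inferred from the log):
	  cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  systemctl is-active crio           # expected: active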
	
	I0730 00:36:51.909102  516753 main.go:141] libmachine: Checking connection to Docker...
	I0730 00:36:51.909113  516753 main.go:141] libmachine: (ha-161305) Calling .GetURL
	I0730 00:36:51.910422  516753 main.go:141] libmachine: (ha-161305) DBG | Using libvirt version 6000000
	I0730 00:36:51.912944  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.913304  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.913331  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.913520  516753 main.go:141] libmachine: Docker is up and running!
	I0730 00:36:51.913547  516753 main.go:141] libmachine: Reticulating splines...
	I0730 00:36:51.913560  516753 client.go:171] duration metric: took 23.152629816s to LocalClient.Create
	I0730 00:36:51.913590  516753 start.go:167] duration metric: took 23.152697956s to libmachine.API.Create "ha-161305"
	I0730 00:36:51.913602  516753 start.go:293] postStartSetup for "ha-161305" (driver="kvm2")
	I0730 00:36:51.913616  516753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:36:51.913639  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:51.913876  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:36:51.913901  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.915857  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.916183  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.916209  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.916342  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.916522  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.916733  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.916868  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:36:52.003019  516753 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:36:52.007144  516753 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:36:52.007172  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:36:52.007251  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:36:52.007361  516753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:36:52.007376  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:36:52.007499  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:36:52.016416  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:36:52.040876  516753 start.go:296] duration metric: took 127.258508ms for postStartSetup
	I0730 00:36:52.040938  516753 main.go:141] libmachine: (ha-161305) Calling .GetConfigRaw
	I0730 00:36:52.041604  516753 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:36:52.043938  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.044291  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.044334  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.044578  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:36:52.044782  516753 start.go:128] duration metric: took 23.303148719s to createHost
	I0730 00:36:52.044807  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:52.047035  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.047331  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.047354  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.047494  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:52.047702  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:52.047910  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:52.048082  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:52.048243  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:52.048418  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:52.048428  516753 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 00:36:52.157068  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722299812.133921114
	
	I0730 00:36:52.157091  516753 fix.go:216] guest clock: 1722299812.133921114
	I0730 00:36:52.157099  516753 fix.go:229] Guest: 2024-07-30 00:36:52.133921114 +0000 UTC Remote: 2024-07-30 00:36:52.044794617 +0000 UTC m=+23.414617294 (delta=89.126497ms)
	I0730 00:36:52.157138  516753 fix.go:200] guest clock delta is within tolerance: 89.126497ms
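	The `date +%!s(MISSING).%!N(MISSING)` string earlier is a Go fmt artifact in the logged command template; the command presumably run on the guest is `date +%s.%N`, whose epoch-with-nanoseconds output is what gets compared against the host clock:
	  date +%s.%N
	  # e.g. 1722299812.133921114 (the guest value logged above); the host/guest delta is then checked against a tolerance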
	I0730 00:36:52.157145  516753 start.go:83] releasing machines lock for "ha-161305", held for 23.415698873s
	I0730 00:36:52.157166  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:52.157441  516753 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:36:52.159934  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.160295  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.160321  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.160463  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:52.160968  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:52.161121  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:52.161200  516753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:36:52.161249  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:52.161362  516753 ssh_runner.go:195] Run: cat /version.json
	I0730 00:36:52.161390  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:52.163860  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.164135  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.164179  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.164201  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.164371  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:52.164542  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:52.164585  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.164609  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.164727  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:52.164786  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:52.164867  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:36:52.164987  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:52.165132  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:52.165321  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:36:52.276182  516753 ssh_runner.go:195] Run: systemctl --version
	I0730 00:36:52.281852  516753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:36:52.439457  516753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:36:52.444741  516753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:36:52.444803  516753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:36:52.460399  516753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 00:36:52.460430  516753 start.go:495] detecting cgroup driver to use...
	I0730 00:36:52.460514  516753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:36:52.475665  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:36:52.488459  516753 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:36:52.488535  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:36:52.501535  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:36:52.514467  516753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:36:52.627090  516753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:36:52.788767  516753 docker.go:233] disabling docker service ...
	I0730 00:36:52.788852  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:36:52.802434  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:36:52.814436  516753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:36:52.921251  516753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:36:53.028623  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:36:53.042213  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:36:53.060248  516753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:36:53.060320  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.070414  516753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:36:53.070477  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.080480  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.090281  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.100034  516753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:36:53.109808  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.119641  516753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.135491  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
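	A hedged sketch of how the CRI-O drop-in looks after the sed edits above; the grep invocation and the expected output lines are illustrative assumptions, not taken from the log:
	
	  # Check the keys minikube just rewrote in the CRI-O drop-in.
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # Expected (assumed) lines, in whatever order the stock drop-in uses:
	  #   pause_image = "registry.k8s.io/pause:3.9"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",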
	I0730 00:36:53.145379  516753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:36:53.154207  516753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 00:36:53.154262  516753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 00:36:53.166031  516753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:36:53.175065  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:36:53.290658  516753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 00:36:53.423478  516753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:36:53.423568  516753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:36:53.428094  516753 start.go:563] Will wait 60s for crictl version
	I0730 00:36:53.428157  516753 ssh_runner.go:195] Run: which crictl
	I0730 00:36:53.431658  516753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:36:53.465361  516753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:36:53.465460  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:36:53.492262  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:36:53.526188  516753 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:36:53.527332  516753 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:36:53.530247  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:53.530612  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:53.530634  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:53.530930  516753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:36:53.534585  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
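	The one-liner above is easier to follow broken out; this sketch has the same effect (assumed equivalent, not copied verbatim from the log):
	
	  # Drop any stale host.minikube.internal entry, append the fresh mapping,
	  # then copy the temp file back over /etc/hosts with sudo.
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	    echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts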
	I0730 00:36:53.546420  516753 kubeadm.go:883] updating cluster {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 00:36:53.546534  516753 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:36:53.546588  516753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:36:53.577859  516753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0730 00:36:53.577943  516753 ssh_runner.go:195] Run: which lz4
	I0730 00:36:53.581468  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0730 00:36:53.581568  516753 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0730 00:36:53.585294  516753 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0730 00:36:53.585326  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0730 00:36:54.816391  516753 crio.go:462] duration metric: took 1.234848456s to copy over tarball
	I0730 00:36:54.816475  516753 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0730 00:36:56.911570  516753 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.095054945s)
	I0730 00:36:56.911599  516753 crio.go:469] duration metric: took 2.095181748s to extract the tarball
	I0730 00:36:56.911608  516753 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0730 00:36:56.948772  516753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:36:56.992406  516753 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:36:56.992435  516753 cache_images.go:84] Images are preloaded, skipping loading
	I0730 00:36:56.992445  516753 kubeadm.go:934] updating node { 192.168.39.80 8443 v1.30.3 crio true true} ...
	I0730 00:36:56.992565  516753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:36:56.992637  516753 ssh_runner.go:195] Run: crio config
	I0730 00:36:57.041933  516753 cni.go:84] Creating CNI manager for ""
	I0730 00:36:57.041951  516753 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0730 00:36:57.041961  516753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 00:36:57.041989  516753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-161305 NodeName:ha-161305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 00:36:57.042155  516753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-161305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 00:36:57.042192  516753 kube-vip.go:115] generating kube-vip config ...
	I0730 00:36:57.042237  516753 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:36:57.059814  516753 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:36:57.059952  516753 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0730 00:36:57.060023  516753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:36:57.070656  516753 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 00:36:57.070744  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0730 00:36:57.079149  516753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0730 00:36:57.094014  516753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:36:57.108514  516753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0730 00:36:57.123209  516753 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0730 00:36:57.138028  516753 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:36:57.141570  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:36:57.152390  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:36:57.258370  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:36:57.274613  516753 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.80
	I0730 00:36:57.274643  516753 certs.go:194] generating shared ca certs ...
	I0730 00:36:57.274667  516753 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.274869  516753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:36:57.274934  516753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:36:57.274947  516753 certs.go:256] generating profile certs ...
	I0730 00:36:57.275035  516753 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:36:57.275054  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt with IP's: []
	I0730 00:36:57.389571  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt ...
	I0730 00:36:57.389613  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt: {Name:mk843da8ae9ed625b23bd908faf33ddb4ca461d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.389868  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key ...
	I0730 00:36:57.389891  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key: {Name:mk274e912f3472d2666bb12e5007c3c4813bd0a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.390021  516753 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.057b504c
	I0730 00:36:57.390045  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.057b504c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.254]
	I0730 00:36:57.498383  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.057b504c ...
	I0730 00:36:57.498417  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.057b504c: {Name:mk2a527a45349e6fa9ab7deb641f7395792f53c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.498583  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.057b504c ...
	I0730 00:36:57.498595  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.057b504c: {Name:mk05d6758edb948cdfd9957e0f080b273a5f0228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.498665  516753 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.057b504c -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:36:57.498735  516753 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.057b504c -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
	I0730 00:36:57.498788  516753 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:36:57.498802  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt with IP's: []
	I0730 00:36:57.601866  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt ...
	I0730 00:36:57.601898  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt: {Name:mk095a9a459cefeb454917fa27f54c463b594076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.602058  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key ...
	I0730 00:36:57.602068  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key: {Name:mkd155d484a412cbbfe26d3a22d9b60af6c16e24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.602133  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:36:57.602150  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:36:57.602161  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:36:57.602174  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:36:57.602187  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:36:57.602199  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:36:57.602211  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:36:57.602223  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:36:57.602277  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:36:57.602310  516753 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:36:57.602317  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:36:57.602342  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:36:57.602362  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:36:57.602383  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:36:57.602419  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:36:57.602444  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:36:57.602458  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:36:57.602472  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:36:57.602981  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:36:57.626494  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:36:57.647705  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:36:57.669245  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:36:57.691116  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0730 00:36:57.712595  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 00:36:57.734074  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:36:57.758402  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:36:57.782242  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:36:57.804174  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:36:57.826153  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:36:57.847907  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 00:36:57.863640  516753 ssh_runner.go:195] Run: openssl version
	I0730 00:36:57.868944  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:36:57.878813  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:36:57.882829  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:36:57.882898  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:36:57.888227  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:36:57.897990  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:36:57.907698  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:36:57.911693  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:36:57.911741  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:36:57.917271  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 00:36:57.927334  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:36:57.937303  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:36:57.941240  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:36:57.941297  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:36:57.946456  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
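	The openssl/ln pairs above follow OpenSSL's hashed-directory convention for /etc/ssl/certs; a minimal sketch of the pattern, reusing the 502384.pem path from the log (the standalone variable form is an assumption):
	
	  # OpenSSL resolves CAs in /etc/ssl/certs by subject hash, so each installed
	  # PEM gets a "<subject-hash>.0" symlink derived with `openssl x509 -hash`.
	  CERT=/usr/share/ca-certificates/502384.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")   # 51391683 in the run above
	  sudo ln -fs /etc/ssl/certs/502384.pem "/etc/ssl/certs/${HASH}.0"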
	I0730 00:36:57.956332  516753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:36:57.959973  516753 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 00:36:57.960030  516753 kubeadm.go:392] StartCluster: {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:36:57.960102  516753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 00:36:57.960143  516753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 00:36:57.994408  516753 cri.go:89] found id: ""
	I0730 00:36:57.994476  516753 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 00:36:58.003813  516753 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0730 00:36:58.012921  516753 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 00:36:58.024757  516753 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 00:36:58.024780  516753 kubeadm.go:157] found existing configuration files:
	
	I0730 00:36:58.024825  516753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 00:36:58.036125  516753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 00:36:58.036202  516753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 00:36:58.048562  516753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 00:36:58.061788  516753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 00:36:58.061844  516753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 00:36:58.074477  516753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 00:36:58.084595  516753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 00:36:58.084662  516753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 00:36:58.097064  516753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 00:36:58.105879  516753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 00:36:58.105934  516753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0730 00:36:58.114897  516753 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0730 00:36:58.218700  516753 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0730 00:36:58.218764  516753 kubeadm.go:310] [preflight] Running pre-flight checks
	I0730 00:36:58.335153  516753 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0730 00:36:58.335290  516753 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0730 00:36:58.335438  516753 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0730 00:36:58.526324  516753 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0730 00:36:58.528609  516753 out.go:204]   - Generating certificates and keys ...
	I0730 00:36:58.528725  516753 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0730 00:36:58.528797  516753 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0730 00:36:58.612166  516753 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0730 00:36:58.888864  516753 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0730 00:36:59.081294  516753 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0730 00:36:59.174030  516753 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0730 00:36:59.254970  516753 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0730 00:36:59.255271  516753 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-161305 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I0730 00:36:59.391004  516753 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0730 00:36:59.391352  516753 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-161305 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I0730 00:36:59.467999  516753 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0730 00:36:59.584232  516753 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0730 00:37:00.068580  516753 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0730 00:37:00.068665  516753 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0730 00:37:00.222101  516753 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0730 00:37:00.294638  516753 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0730 00:37:00.673109  516753 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0730 00:37:00.790780  516753 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0730 00:37:01.027593  516753 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0730 00:37:01.027998  516753 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0730 00:37:01.030626  516753 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0730 00:37:01.032462  516753 out.go:204]   - Booting up control plane ...
	I0730 00:37:01.032586  516753 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0730 00:37:01.032737  516753 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0730 00:37:01.032857  516753 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0730 00:37:01.047285  516753 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0730 00:37:01.050405  516753 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0730 00:37:01.050477  516753 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0730 00:37:01.181508  516753 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0730 00:37:01.181612  516753 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0730 00:37:01.682528  516753 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.374331ms
	I0730 00:37:01.682641  516753 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0730 00:37:07.646757  516753 kubeadm.go:310] [api-check] The API server is healthy after 5.96759416s
	I0730 00:37:07.659136  516753 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0730 00:37:07.675124  516753 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0730 00:37:07.702355  516753 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0730 00:37:07.702616  516753 kubeadm.go:310] [mark-control-plane] Marking the node ha-161305 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0730 00:37:07.718852  516753 kubeadm.go:310] [bootstrap-token] Using token: r6ju3c.hq3k4ysj5ca33xmr
	I0730 00:37:07.720572  516753 out.go:204]   - Configuring RBAC rules ...
	I0730 00:37:07.720767  516753 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0730 00:37:07.728868  516753 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0730 00:37:07.744401  516753 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0730 00:37:07.749030  516753 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0730 00:37:07.752564  516753 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0730 00:37:07.756175  516753 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0730 00:37:08.054029  516753 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0730 00:37:08.493513  516753 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0730 00:37:09.054914  516753 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0730 00:37:09.055973  516753 kubeadm.go:310] 
	I0730 00:37:09.056050  516753 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0730 00:37:09.056058  516753 kubeadm.go:310] 
	I0730 00:37:09.056159  516753 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0730 00:37:09.056179  516753 kubeadm.go:310] 
	I0730 00:37:09.056209  516753 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0730 00:37:09.056349  516753 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0730 00:37:09.056416  516753 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0730 00:37:09.056428  516753 kubeadm.go:310] 
	I0730 00:37:09.056481  516753 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0730 00:37:09.056489  516753 kubeadm.go:310] 
	I0730 00:37:09.056544  516753 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0730 00:37:09.056570  516753 kubeadm.go:310] 
	I0730 00:37:09.056656  516753 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0730 00:37:09.056768  516753 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0730 00:37:09.056866  516753 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0730 00:37:09.056877  516753 kubeadm.go:310] 
	I0730 00:37:09.056982  516753 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0730 00:37:09.057097  516753 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0730 00:37:09.057107  516753 kubeadm.go:310] 
	I0730 00:37:09.057215  516753 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r6ju3c.hq3k4ysj5ca33xmr \
	I0730 00:37:09.057374  516753 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 \
	I0730 00:37:09.057415  516753 kubeadm.go:310] 	--control-plane 
	I0730 00:37:09.057423  516753 kubeadm.go:310] 
	I0730 00:37:09.057534  516753 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0730 00:37:09.057553  516753 kubeadm.go:310] 
	I0730 00:37:09.057674  516753 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r6ju3c.hq3k4ysj5ca33xmr \
	I0730 00:37:09.057816  516753 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 
	I0730 00:37:09.058064  516753 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0730 00:37:09.058085  516753 cni.go:84] Creating CNI manager for ""
	I0730 00:37:09.058093  516753 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0730 00:37:09.060438  516753 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0730 00:37:09.061718  516753 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0730 00:37:09.066780  516753 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0730 00:37:09.066799  516753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0730 00:37:09.086768  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0730 00:37:09.490993  516753 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0730 00:37:09.491069  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:09.491098  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-161305 minikube.k8s.io/updated_at=2024_07_30T00_37_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=ha-161305 minikube.k8s.io/primary=true
	I0730 00:37:09.628964  516753 ops.go:34] apiserver oom_adj: -16
	I0730 00:37:09.646846  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:10.147120  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:10.647387  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:11.147045  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:11.647233  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:12.146973  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:12.647507  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:13.147490  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:13.647655  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:14.147826  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:14.647712  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:15.147373  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:15.647863  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:16.147195  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:16.646934  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:17.146898  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:17.647144  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:18.147654  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:18.647934  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:19.146949  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:19.646894  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:20.147326  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:20.647256  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:21.147642  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:21.647509  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:21.733926  516753 kubeadm.go:1113] duration metric: took 12.24291342s to wait for elevateKubeSystemPrivileges
	I0730 00:37:21.733960  516753 kubeadm.go:394] duration metric: took 23.773935661s to StartCluster
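	The repeated `kubectl get sa default` calls above are a readiness poll; a hedged sketch of an equivalent loop (the explicit until/sleep form is an assumption, the runner retries internally at roughly the same interval):
	
	  # Poll about every half second until the "default" ServiceAccount exists,
	  # which is what the elevateKubeSystemPrivileges step above is waiting on.
	  until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done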
	I0730 00:37:21.733985  516753 settings.go:142] acquiring lock: {Name:mk89b2537c1ec20302d90ab73b167422bb363b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:21.734072  516753 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:37:21.734927  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/kubeconfig: {Name:mk6ecf4e5b07b810f1fa2b9790857d7586f0cf41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:21.735193  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0730 00:37:21.735204  516753 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:37:21.735232  516753 start.go:241] waiting for startup goroutines ...
	I0730 00:37:21.735242  516753 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0730 00:37:21.735312  516753 addons.go:69] Setting storage-provisioner=true in profile "ha-161305"
	I0730 00:37:21.735329  516753 addons.go:69] Setting default-storageclass=true in profile "ha-161305"
	I0730 00:37:21.735344  516753 addons.go:234] Setting addon storage-provisioner=true in "ha-161305"
	I0730 00:37:21.735357  516753 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-161305"
	I0730 00:37:21.735397  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:37:21.735425  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:37:21.735742  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.735774  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.735808  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.735849  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.750956  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0730 00:37:21.751378  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.751898  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.751921  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.752275  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.752477  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:37:21.754660  516753 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:37:21.754899  516753 kapi.go:59] client config for ha-161305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key", CAFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0730 00:37:21.755364  516753 cert_rotation.go:137] Starting client certificate rotation controller
	I0730 00:37:21.755557  516753 addons.go:234] Setting addon default-storageclass=true in "ha-161305"
	I0730 00:37:21.755594  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:37:21.755877  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.755920  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.756851  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39317
	I0730 00:37:21.757395  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.757968  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.757993  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.758361  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.758864  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.758917  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.771946  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0730 00:37:21.772486  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.773013  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.773040  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.773418  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.773972  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.774004  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.774124  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43581
	I0730 00:37:21.774491  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.774929  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.774950  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.775248  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.775440  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:37:21.777325  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:37:21.779280  516753 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 00:37:21.780628  516753 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 00:37:21.780644  516753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0730 00:37:21.780658  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:37:21.783472  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:21.783953  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:37:21.783987  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:21.784135  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:37:21.784291  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:37:21.784443  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:37:21.784649  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:37:21.795042  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37343
	I0730 00:37:21.795491  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.796046  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.796075  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.796438  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.796688  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:37:21.798525  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:37:21.798763  516753 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0730 00:37:21.798782  516753 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0730 00:37:21.798803  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:37:21.801238  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:21.801697  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:37:21.801725  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:21.801908  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:37:21.802086  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:37:21.802251  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:37:21.802411  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:37:21.899791  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0730 00:37:22.006706  516753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 00:37:22.058875  516753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0730 00:37:22.297829  516753 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
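For reference, the sed pipeline run a few lines above rewrites the coredns ConfigMap so the Corefile gains a log directive and a hosts stanza resolving host.minikube.internal to the host gateway. A minimal sketch of the injected block, with the values taken directly from that command, is:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}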
	I0730 00:37:22.682018  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.682048  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.682119  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.682145  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.682352  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.682369  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.682385  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.682393  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.682454  516753 main.go:141] libmachine: (ha-161305) DBG | Closing plugin on server side
	I0730 00:37:22.682500  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.682521  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.682532  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.682543  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.682636  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.682652  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.682652  516753 main.go:141] libmachine: (ha-161305) DBG | Closing plugin on server side
	I0730 00:37:22.682831  516753 main.go:141] libmachine: (ha-161305) DBG | Closing plugin on server side
	I0730 00:37:22.682901  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.682919  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.683079  516753 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0730 00:37:22.683087  516753 round_trippers.go:469] Request Headers:
	I0730 00:37:22.683097  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:37:22.683102  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:37:22.696319  516753 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0730 00:37:22.696931  516753 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0730 00:37:22.696945  516753 round_trippers.go:469] Request Headers:
	I0730 00:37:22.696953  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:37:22.696957  516753 round_trippers.go:473]     Content-Type: application/json
	I0730 00:37:22.696961  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:37:22.699806  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
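The GET and PUT above touch the standard StorageClass object that storageclass.yaml just created, marking it as the cluster default. Assuming the stock minikube addon manifest (only the object name "standard" is confirmed by the request path; the provisioner name here is an assumption), the object looks roughly like:

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	provisioner: k8s.io/minikube-hostpath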
	I0730 00:37:22.699972  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.699984  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.700328  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.700356  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.702109  516753 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0730 00:37:22.703311  516753 addons.go:510] duration metric: took 968.066182ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0730 00:37:22.703359  516753 start.go:246] waiting for cluster config update ...
	I0730 00:37:22.703379  516753 start.go:255] writing updated cluster config ...
	I0730 00:37:22.704828  516753 out.go:177] 
	I0730 00:37:22.706225  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:37:22.706298  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:37:22.707934  516753 out.go:177] * Starting "ha-161305-m02" control-plane node in "ha-161305" cluster
	I0730 00:37:22.709138  516753 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:37:22.709166  516753 cache.go:56] Caching tarball of preloaded images
	I0730 00:37:22.709259  516753 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:37:22.709274  516753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:37:22.709362  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:37:22.709515  516753 start.go:360] acquireMachinesLock for ha-161305-m02: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:37:22.709562  516753 start.go:364] duration metric: took 25.739µs to acquireMachinesLock for "ha-161305-m02"
	I0730 00:37:22.709586  516753 start.go:93] Provisioning new machine with config: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:37:22.709656  516753 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0730 00:37:22.711233  516753 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0730 00:37:22.711332  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:22.711357  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:22.728619  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0730 00:37:22.729175  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:22.729796  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:22.729820  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:22.730213  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:22.730428  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetMachineName
	I0730 00:37:22.730581  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:22.730803  516753 start.go:159] libmachine.API.Create for "ha-161305" (driver="kvm2")
	I0730 00:37:22.730837  516753 client.go:168] LocalClient.Create starting
	I0730 00:37:22.730877  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 00:37:22.730919  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:37:22.730941  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:37:22.731011  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 00:37:22.731039  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:37:22.731063  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:37:22.731088  516753 main.go:141] libmachine: Running pre-create checks...
	I0730 00:37:22.731101  516753 main.go:141] libmachine: (ha-161305-m02) Calling .PreCreateCheck
	I0730 00:37:22.731285  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetConfigRaw
	I0730 00:37:22.731690  516753 main.go:141] libmachine: Creating machine...
	I0730 00:37:22.731706  516753 main.go:141] libmachine: (ha-161305-m02) Calling .Create
	I0730 00:37:22.731832  516753 main.go:141] libmachine: (ha-161305-m02) Creating KVM machine...
	I0730 00:37:22.732984  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found existing default KVM network
	I0730 00:37:22.733134  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found existing private KVM network mk-ha-161305
	I0730 00:37:22.733295  516753 main.go:141] libmachine: (ha-161305-m02) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02 ...
	I0730 00:37:22.733321  516753 main.go:141] libmachine: (ha-161305-m02) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 00:37:22.733391  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:22.733273  517154 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:37:22.733488  516753 main.go:141] libmachine: (ha-161305-m02) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 00:37:23.012758  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:23.012585  517154 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa...
	I0730 00:37:23.495090  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:23.494941  517154 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/ha-161305-m02.rawdisk...
	I0730 00:37:23.495124  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Writing magic tar header
	I0730 00:37:23.495140  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Writing SSH key tar header
	I0730 00:37:23.495148  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:23.495060  517154 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02 ...
	I0730 00:37:23.495160  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02
	I0730 00:37:23.495242  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02 (perms=drwx------)
	I0730 00:37:23.495265  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 00:37:23.495273  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 00:37:23.495282  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:37:23.495291  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 00:37:23.495300  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 00:37:23.495316  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 00:37:23.495323  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins
	I0730 00:37:23.495330  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 00:37:23.495339  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 00:37:23.495346  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home
	I0730 00:37:23.495354  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 00:37:23.495362  516753 main.go:141] libmachine: (ha-161305-m02) Creating domain...
	I0730 00:37:23.495372  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Skipping /home - not owner
	I0730 00:37:23.496352  516753 main.go:141] libmachine: (ha-161305-m02) define libvirt domain using xml: 
	I0730 00:37:23.496371  516753 main.go:141] libmachine: (ha-161305-m02) <domain type='kvm'>
	I0730 00:37:23.496378  516753 main.go:141] libmachine: (ha-161305-m02)   <name>ha-161305-m02</name>
	I0730 00:37:23.496384  516753 main.go:141] libmachine: (ha-161305-m02)   <memory unit='MiB'>2200</memory>
	I0730 00:37:23.496392  516753 main.go:141] libmachine: (ha-161305-m02)   <vcpu>2</vcpu>
	I0730 00:37:23.496398  516753 main.go:141] libmachine: (ha-161305-m02)   <features>
	I0730 00:37:23.496408  516753 main.go:141] libmachine: (ha-161305-m02)     <acpi/>
	I0730 00:37:23.496416  516753 main.go:141] libmachine: (ha-161305-m02)     <apic/>
	I0730 00:37:23.496422  516753 main.go:141] libmachine: (ha-161305-m02)     <pae/>
	I0730 00:37:23.496427  516753 main.go:141] libmachine: (ha-161305-m02)     
	I0730 00:37:23.496432  516753 main.go:141] libmachine: (ha-161305-m02)   </features>
	I0730 00:37:23.496440  516753 main.go:141] libmachine: (ha-161305-m02)   <cpu mode='host-passthrough'>
	I0730 00:37:23.496445  516753 main.go:141] libmachine: (ha-161305-m02)   
	I0730 00:37:23.496450  516753 main.go:141] libmachine: (ha-161305-m02)   </cpu>
	I0730 00:37:23.496470  516753 main.go:141] libmachine: (ha-161305-m02)   <os>
	I0730 00:37:23.496492  516753 main.go:141] libmachine: (ha-161305-m02)     <type>hvm</type>
	I0730 00:37:23.496503  516753 main.go:141] libmachine: (ha-161305-m02)     <boot dev='cdrom'/>
	I0730 00:37:23.496519  516753 main.go:141] libmachine: (ha-161305-m02)     <boot dev='hd'/>
	I0730 00:37:23.496534  516753 main.go:141] libmachine: (ha-161305-m02)     <bootmenu enable='no'/>
	I0730 00:37:23.496550  516753 main.go:141] libmachine: (ha-161305-m02)   </os>
	I0730 00:37:23.496558  516753 main.go:141] libmachine: (ha-161305-m02)   <devices>
	I0730 00:37:23.496563  516753 main.go:141] libmachine: (ha-161305-m02)     <disk type='file' device='cdrom'>
	I0730 00:37:23.496572  516753 main.go:141] libmachine: (ha-161305-m02)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/boot2docker.iso'/>
	I0730 00:37:23.496577  516753 main.go:141] libmachine: (ha-161305-m02)       <target dev='hdc' bus='scsi'/>
	I0730 00:37:23.496584  516753 main.go:141] libmachine: (ha-161305-m02)       <readonly/>
	I0730 00:37:23.496591  516753 main.go:141] libmachine: (ha-161305-m02)     </disk>
	I0730 00:37:23.496597  516753 main.go:141] libmachine: (ha-161305-m02)     <disk type='file' device='disk'>
	I0730 00:37:23.496606  516753 main.go:141] libmachine: (ha-161305-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 00:37:23.496614  516753 main.go:141] libmachine: (ha-161305-m02)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/ha-161305-m02.rawdisk'/>
	I0730 00:37:23.496621  516753 main.go:141] libmachine: (ha-161305-m02)       <target dev='hda' bus='virtio'/>
	I0730 00:37:23.496627  516753 main.go:141] libmachine: (ha-161305-m02)     </disk>
	I0730 00:37:23.496636  516753 main.go:141] libmachine: (ha-161305-m02)     <interface type='network'>
	I0730 00:37:23.496665  516753 main.go:141] libmachine: (ha-161305-m02)       <source network='mk-ha-161305'/>
	I0730 00:37:23.496685  516753 main.go:141] libmachine: (ha-161305-m02)       <model type='virtio'/>
	I0730 00:37:23.496697  516753 main.go:141] libmachine: (ha-161305-m02)     </interface>
	I0730 00:37:23.496717  516753 main.go:141] libmachine: (ha-161305-m02)     <interface type='network'>
	I0730 00:37:23.496726  516753 main.go:141] libmachine: (ha-161305-m02)       <source network='default'/>
	I0730 00:37:23.496739  516753 main.go:141] libmachine: (ha-161305-m02)       <model type='virtio'/>
	I0730 00:37:23.496754  516753 main.go:141] libmachine: (ha-161305-m02)     </interface>
	I0730 00:37:23.496770  516753 main.go:141] libmachine: (ha-161305-m02)     <serial type='pty'>
	I0730 00:37:23.496785  516753 main.go:141] libmachine: (ha-161305-m02)       <target port='0'/>
	I0730 00:37:23.496796  516753 main.go:141] libmachine: (ha-161305-m02)     </serial>
	I0730 00:37:23.496803  516753 main.go:141] libmachine: (ha-161305-m02)     <console type='pty'>
	I0730 00:37:23.496810  516753 main.go:141] libmachine: (ha-161305-m02)       <target type='serial' port='0'/>
	I0730 00:37:23.496817  516753 main.go:141] libmachine: (ha-161305-m02)     </console>
	I0730 00:37:23.496822  516753 main.go:141] libmachine: (ha-161305-m02)     <rng model='virtio'>
	I0730 00:37:23.496831  516753 main.go:141] libmachine: (ha-161305-m02)       <backend model='random'>/dev/random</backend>
	I0730 00:37:23.496839  516753 main.go:141] libmachine: (ha-161305-m02)     </rng>
	I0730 00:37:23.496843  516753 main.go:141] libmachine: (ha-161305-m02)     
	I0730 00:37:23.496849  516753 main.go:141] libmachine: (ha-161305-m02)     
	I0730 00:37:23.496853  516753 main.go:141] libmachine: (ha-161305-m02)   </devices>
	I0730 00:37:23.496867  516753 main.go:141] libmachine: (ha-161305-m02) </domain>
	I0730 00:37:23.496881  516753 main.go:141] libmachine: (ha-161305-m02) 
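The XML emitted above is the complete libvirt definition for the m02 VM: 2 vCPUs, 2200 MiB of RAM, a CD-ROM device booting the boot2docker ISO, a raw-format virtio disk, and two virtio NICs (the private mk-ha-161305 network plus the default NAT network). A hypothetical manual equivalent, assuming the same XML were saved to a file named as below, would be:

	virsh --connect qemu:///system define ha-161305-m02.xml
	virsh --connect qemu:///system start ha-161305-m02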
	I0730 00:37:23.503402  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:8a:3b:66 in network default
	I0730 00:37:23.503981  516753 main.go:141] libmachine: (ha-161305-m02) Ensuring networks are active...
	I0730 00:37:23.504028  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:23.504678  516753 main.go:141] libmachine: (ha-161305-m02) Ensuring network default is active
	I0730 00:37:23.504984  516753 main.go:141] libmachine: (ha-161305-m02) Ensuring network mk-ha-161305 is active
	I0730 00:37:23.505412  516753 main.go:141] libmachine: (ha-161305-m02) Getting domain xml...
	I0730 00:37:23.506140  516753 main.go:141] libmachine: (ha-161305-m02) Creating domain...
	I0730 00:37:24.738496  516753 main.go:141] libmachine: (ha-161305-m02) Waiting to get IP...
	I0730 00:37:24.739543  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:24.739982  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:24.740011  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:24.739950  517154 retry.go:31] will retry after 240.507777ms: waiting for machine to come up
	I0730 00:37:24.982455  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:24.982949  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:24.982984  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:24.982882  517154 retry.go:31] will retry after 343.734606ms: waiting for machine to come up
	I0730 00:37:25.328448  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:25.328889  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:25.328916  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:25.328857  517154 retry.go:31] will retry after 407.015391ms: waiting for machine to come up
	I0730 00:37:25.737479  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:25.737934  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:25.737985  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:25.737877  517154 retry.go:31] will retry after 553.281612ms: waiting for machine to come up
	I0730 00:37:26.292463  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:26.292914  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:26.292954  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:26.292862  517154 retry.go:31] will retry after 525.274717ms: waiting for machine to come up
	I0730 00:37:26.819274  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:26.819682  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:26.819706  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:26.819621  517154 retry.go:31] will retry after 719.917184ms: waiting for machine to come up
	I0730 00:37:27.541499  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:27.541949  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:27.541988  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:27.541887  517154 retry.go:31] will retry after 759.939347ms: waiting for machine to come up
	I0730 00:37:28.303096  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:28.303451  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:28.303483  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:28.303403  517154 retry.go:31] will retry after 988.04931ms: waiting for machine to come up
	I0730 00:37:29.292885  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:29.293365  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:29.293579  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:29.293295  517154 retry.go:31] will retry after 1.192367296s: waiting for machine to come up
	I0730 00:37:30.486839  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:30.487223  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:30.487280  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:30.487167  517154 retry.go:31] will retry after 1.500364555s: waiting for machine to come up
	I0730 00:37:31.990084  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:31.990732  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:31.990763  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:31.990679  517154 retry.go:31] will retry after 2.339994382s: waiting for machine to come up
	I0730 00:37:34.332879  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:34.333348  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:34.333375  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:34.333309  517154 retry.go:31] will retry after 2.725807557s: waiting for machine to come up
	I0730 00:37:37.061917  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:37.062512  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:37.062543  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:37.062478  517154 retry.go:31] will retry after 3.140725454s: waiting for machine to come up
	I0730 00:37:40.205929  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:40.206301  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:40.206632  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:40.206544  517154 retry.go:31] will retry after 4.983106137s: waiting for machine to come up
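The retry lines above show libmachine polling the libvirt DHCP leases with a growing backoff until the new domain reports an address. A rough manual equivalent (a hypothetical script with a fixed sleep instead of backoff; requires bash and virsh) is:

	# poll the DHCP leases until the domain has an IPv4 address
	for i in $(seq 1 60); do
	  ip=$(virsh --connect qemu:///system domifaddr ha-161305-m02 | awk '/ipv4/ {print $4}')
	  [ -n "$ip" ] && { echo "got ${ip%/*}"; break; }
	  sleep 2
	done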
	I0730 00:37:45.191468  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.192006  516753 main.go:141] libmachine: (ha-161305-m02) Found IP for machine: 192.168.39.126
	I0730 00:37:45.192034  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has current primary IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.192048  516753 main.go:141] libmachine: (ha-161305-m02) Reserving static IP address...
	I0730 00:37:45.192619  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find host DHCP lease matching {name: "ha-161305-m02", mac: "52:54:00:44:e3:c9", ip: "192.168.39.126"} in network mk-ha-161305
	I0730 00:37:45.265169  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Getting to WaitForSSH function...
	I0730 00:37:45.265195  516753 main.go:141] libmachine: (ha-161305-m02) Reserved static IP address: 192.168.39.126
	I0730 00:37:45.265208  516753 main.go:141] libmachine: (ha-161305-m02) Waiting for SSH to be available...
	I0730 00:37:45.267760  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.268211  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.268241  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.268480  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Using SSH client type: external
	I0730 00:37:45.268509  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa (-rw-------)
	I0730 00:37:45.268541  516753 main.go:141] libmachine: (ha-161305-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:37:45.268556  516753 main.go:141] libmachine: (ha-161305-m02) DBG | About to run SSH command:
	I0730 00:37:45.268570  516753 main.go:141] libmachine: (ha-161305-m02) DBG | exit 0
	I0730 00:37:45.396779  516753 main.go:141] libmachine: (ha-161305-m02) DBG | SSH cmd err, output: <nil>: 
	I0730 00:37:45.397063  516753 main.go:141] libmachine: (ha-161305-m02) KVM machine creation complete!
	I0730 00:37:45.397374  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetConfigRaw
	I0730 00:37:45.397994  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:45.398219  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:45.398429  516753 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 00:37:45.398459  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:37:45.399869  516753 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 00:37:45.399884  516753 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 00:37:45.399889  516753 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 00:37:45.399895  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.402275  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.402631  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.402650  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.402780  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.402950  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.403102  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.403242  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.403425  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:45.403683  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:45.403699  516753 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 00:37:45.511797  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:37:45.511819  516753 main.go:141] libmachine: Detecting the provisioner...
	I0730 00:37:45.511827  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.514704  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.515077  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.515112  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.515270  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.515455  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.515651  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.515787  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.515965  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:45.516162  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:45.516174  516753 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 00:37:45.625352  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 00:37:45.625457  516753 main.go:141] libmachine: found compatible host: buildroot
	I0730 00:37:45.625468  516753 main.go:141] libmachine: Provisioning with buildroot...
	I0730 00:37:45.625479  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetMachineName
	I0730 00:37:45.625801  516753 buildroot.go:166] provisioning hostname "ha-161305-m02"
	I0730 00:37:45.625845  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetMachineName
	I0730 00:37:45.626078  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.628630  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.629030  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.629059  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.629188  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.629385  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.629597  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.629823  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.630025  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:45.630232  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:45.630246  516753 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305-m02 && echo "ha-161305-m02" | sudo tee /etc/hostname
	I0730 00:37:45.755899  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305-m02
	
	I0730 00:37:45.755928  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.758701  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.758989  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.759023  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.759147  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.759370  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.759539  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.759676  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.759855  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:45.760059  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:45.760077  516753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:37:45.880889  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:37:45.880927  516753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:37:45.880950  516753 buildroot.go:174] setting up certificates
	I0730 00:37:45.880961  516753 provision.go:84] configureAuth start
	I0730 00:37:45.880973  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetMachineName
	I0730 00:37:45.881272  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:37:45.883737  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.884115  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.884143  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.884270  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.886533  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.886893  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.886926  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.887058  516753 provision.go:143] copyHostCerts
	I0730 00:37:45.887095  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:37:45.887140  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:37:45.887152  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:37:45.887242  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:37:45.887340  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:37:45.887359  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:37:45.887366  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:37:45.887395  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:37:45.887441  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:37:45.887457  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:37:45.887463  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:37:45.887484  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:37:45.887542  516753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305-m02 san=[127.0.0.1 192.168.39.126 ha-161305-m02 localhost minikube]
	I0730 00:37:45.945115  516753 provision.go:177] copyRemoteCerts
	I0730 00:37:45.945183  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:37:45.945210  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.947826  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.948207  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.948245  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.948393  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.948578  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.948729  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.948853  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:37:46.034791  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:37:46.034862  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:37:46.060900  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:37:46.060990  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 00:37:46.086451  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:37:46.086529  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0730 00:37:46.111836  516753 provision.go:87] duration metric: took 230.859762ms to configureAuth
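configureAuth above produced a server certificate signed by the local minikube CA, with the SANs listed in the provision.go line (127.0.0.1, 192.168.39.126, ha-161305-m02, localhost, minikube). A rough openssl equivalent, purely illustrative since minikube generates the certificate in Go rather than shelling out, would be:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.ha-161305-m02"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.126,DNS:ha-161305-m02,DNS:localhost,DNS:minikube")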
	I0730 00:37:46.111864  516753 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:37:46.112058  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:37:46.112154  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:46.115151  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.115532  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.115561  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.115780  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.116013  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.116276  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.116459  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.116668  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:46.116899  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:46.116916  516753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:37:46.384040  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:37:46.384073  516753 main.go:141] libmachine: Checking connection to Docker...
	I0730 00:37:46.384081  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetURL
	I0730 00:37:46.385507  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Using libvirt version 6000000
	I0730 00:37:46.387687  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.388076  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.388101  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.388320  516753 main.go:141] libmachine: Docker is up and running!
	I0730 00:37:46.388337  516753 main.go:141] libmachine: Reticulating splines...
	I0730 00:37:46.388347  516753 client.go:171] duration metric: took 23.657500004s to LocalClient.Create
	I0730 00:37:46.388377  516753 start.go:167] duration metric: took 23.657600459s to libmachine.API.Create "ha-161305"
	I0730 00:37:46.388389  516753 start.go:293] postStartSetup for "ha-161305-m02" (driver="kvm2")
	I0730 00:37:46.388402  516753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:37:46.388424  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.388715  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:37:46.388741  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:46.391189  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.391580  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.391608  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.391782  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.391983  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.392173  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.392327  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:37:46.478242  516753 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:37:46.482085  516753 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:37:46.482110  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:37:46.482179  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:37:46.482248  516753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:37:46.482258  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:37:46.482336  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:37:46.490894  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:37:46.512068  516753 start.go:296] duration metric: took 123.663993ms for postStartSetup
	I0730 00:37:46.512118  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetConfigRaw
	I0730 00:37:46.512763  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:37:46.515301  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.515641  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.515673  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.515889  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:37:46.516123  516753 start.go:128] duration metric: took 23.806454125s to createHost
	I0730 00:37:46.516151  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:46.518357  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.518644  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.518673  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.518814  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.519004  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.519177  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.519314  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.519496  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:46.519659  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:46.519668  516753 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 00:37:46.629163  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722299866.607970383
	
	I0730 00:37:46.629189  516753 fix.go:216] guest clock: 1722299866.607970383
	I0730 00:37:46.629197  516753 fix.go:229] Guest: 2024-07-30 00:37:46.607970383 +0000 UTC Remote: 2024-07-30 00:37:46.516138998 +0000 UTC m=+77.885961689 (delta=91.831385ms)
	I0730 00:37:46.629214  516753 fix.go:200] guest clock delta is within tolerance: 91.831385ms
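For context on the fix.go lines above: the provisioner runs `date +%s.%N` on the guest, parses the result, and accepts the machine when the guest/host clock delta stays inside a tolerance (here 91.8ms). A minimal illustrative sketch of that comparison follows; the 2-second tolerance and the sample output are assumptions for the example, not minikube's actual values or code.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses the output of `date +%s.%N` captured on the guest
    // and returns how far the guest clock is from the local (host) clock.
    func guestClockDelta(guestOut string, local time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(local), nil
    }

    func main() {
        // Sample value taken from the log line above.
        out := "1722299866.607970383\n"
        delta, err := guestClockDelta(out, time.Now())
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // assumed for illustration only
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would re-sync\n", delta)
        }
    }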
	I0730 00:37:46.629219  516753 start.go:83] releasing machines lock for "ha-161305-m02", held for 23.919646347s
	I0730 00:37:46.629241  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.629569  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:37:46.632152  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.632483  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.632511  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.634971  516753 out.go:177] * Found network options:
	I0730 00:37:46.636255  516753 out.go:177]   - NO_PROXY=192.168.39.80
	W0730 00:37:46.637476  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 00:37:46.637506  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.638017  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.638219  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.638307  516753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:37:46.638362  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	W0730 00:37:46.638436  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 00:37:46.638499  516753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:37:46.638515  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:46.640789  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.641141  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.641170  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.641189  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.641264  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.641462  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.641619  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.641632  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.641655  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.641740  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:37:46.641976  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.642134  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.642309  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.642479  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:37:46.883173  516753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:37:46.888907  516753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:37:46.888970  516753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:37:46.904225  516753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 00:37:46.904255  516753 start.go:495] detecting cgroup driver to use...
	I0730 00:37:46.904346  516753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:37:46.919641  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:37:46.932861  516753 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:37:46.932930  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:37:46.946141  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:37:46.959737  516753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:37:47.076469  516753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:37:47.241858  516753 docker.go:233] disabling docker service ...
	I0730 00:37:47.241925  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:37:47.258144  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:37:47.271355  516753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:37:47.396700  516753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:37:47.511681  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:37:47.525833  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:37:47.542979  516753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:37:47.543058  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.553712  516753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:37:47.553784  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.563932  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.573482  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.583372  516753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:37:47.593240  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.602697  516753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.618421  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.628078  516753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:37:47.637033  516753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 00:37:47.637090  516753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 00:37:47.649603  516753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:37:47.659006  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:37:47.776747  516753 ssh_runner.go:195] Run: sudo systemctl restart crio
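Each `ssh_runner.go:195` entry above is one command executed on the new machine over SSH (disable docker/cri-dockerd, rewrite /etc/crio/crio.conf.d/02-crio.conf with sed, then daemon-reload and restart crio). A minimal sketch of that pattern with golang.org/x/crypto/ssh is shown below; the address, user, and key path are taken from the log, the single sed command is one of the tweaks above, and none of this is minikube's actual ssh_runner implementation.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote opens one SSH session per command (a session can only run a
    // single command) and returns the combined stdout/stderr.
    func runRemote(client *ssh.Client, cmd string) (string, error) {
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
        }
        client, err := ssh.Dial("tcp", "192.168.39.126:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // One of the CRI-O tweaks from the log: switch the cgroup manager.
        out, err := runRemote(client, `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`)
        fmt.Println(out, err)
    }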
	I0730 00:37:47.910467  516753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:37:47.910554  516753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:37:47.915148  516753 start.go:563] Will wait 60s for crictl version
	I0730 00:37:47.915220  516753 ssh_runner.go:195] Run: which crictl
	I0730 00:37:47.918871  516753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:37:47.955620  516753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:37:47.955720  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:37:47.982020  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:37:48.010734  516753 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:37:48.012141  516753 out.go:177]   - env NO_PROXY=192.168.39.80
	I0730 00:37:48.013340  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:37:48.016450  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:48.016854  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:48.016879  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:48.017165  516753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:37:48.020973  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:37:48.033388  516753 mustload.go:65] Loading cluster: ha-161305
	I0730 00:37:48.033619  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:37:48.033881  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:48.033921  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:48.049782  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
	I0730 00:37:48.050263  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:48.050728  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:48.050754  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:48.051129  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:48.051377  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:37:48.052993  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:37:48.053326  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:48.053368  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:48.068216  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35297
	I0730 00:37:48.068647  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:48.069196  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:48.069221  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:48.069539  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:48.069759  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:37:48.069905  516753 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.126
	I0730 00:37:48.069918  516753 certs.go:194] generating shared ca certs ...
	I0730 00:37:48.069938  516753 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:48.070105  516753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:37:48.070152  516753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:37:48.070167  516753 certs.go:256] generating profile certs ...
	I0730 00:37:48.070270  516753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:37:48.070304  516753 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.4fb5d8e8
	I0730 00:37:48.070326  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.4fb5d8e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.126 192.168.39.254]
	I0730 00:37:48.264363  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.4fb5d8e8 ...
	I0730 00:37:48.264393  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.4fb5d8e8: {Name:mk33991990a82d48e58b66a07fc4d399aa40ab4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:48.264605  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.4fb5d8e8 ...
	I0730 00:37:48.264627  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.4fb5d8e8: {Name:mk2fbb9322662bb735800bbd51301531f9faa956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:48.264752  516753 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.4fb5d8e8 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:37:48.264928  516753 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.4fb5d8e8 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
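The certs.go:363 step above generates an apiserver serving certificate whose SANs cover the service IP, both control-plane node IPs, and the kube-vip VIP (192.168.39.254), so the same certificate stays valid no matter which endpoint a client dials. The sketch below builds a certificate with that SAN list using crypto/x509; it is self-signed for brevity, whereas the real cert is signed by the cluster CA, so treat it as an illustration of the SAN handling only.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SAN IPs copied from the crypto.go:68 log line above.
        sans := []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.39.80"), net.ParseIP("192.168.39.126"), net.ParseIP("192.168.39.254"),
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  sans,
        }
        // Self-signed here; minikube signs the apiserver cert with its cluster CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }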
	I0730 00:37:48.265125  516753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:37:48.265144  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:37:48.265163  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:37:48.265185  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:37:48.265202  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:37:48.265220  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:37:48.265236  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:37:48.265255  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:37:48.265277  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:37:48.265342  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:37:48.265388  516753 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:37:48.265404  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:37:48.265439  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:37:48.265470  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:37:48.265502  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:37:48.265556  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:37:48.265591  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:37:48.265610  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:37:48.265631  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:37:48.265676  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:37:48.268648  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:48.269155  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:37:48.269189  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:48.269375  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:37:48.269577  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:37:48.269733  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:37:48.269858  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:37:48.345124  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0730 00:37:48.349937  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0730 00:37:48.364526  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0730 00:37:48.371053  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0730 00:37:48.381684  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0730 00:37:48.385967  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0730 00:37:48.396039  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0730 00:37:48.399905  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0730 00:37:48.415622  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0730 00:37:48.419701  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0730 00:37:48.429555  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0730 00:37:48.433651  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0730 00:37:48.442848  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:37:48.466298  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:37:48.488792  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:37:48.510960  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:37:48.532855  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0730 00:37:48.555244  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 00:37:48.577909  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:37:48.599778  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:37:48.621320  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:37:48.644196  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:37:48.665794  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:37:48.687936  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0730 00:37:48.703150  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0730 00:37:48.718210  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0730 00:37:48.733165  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0730 00:37:48.748526  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0730 00:37:48.763541  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0730 00:37:48.778980  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0730 00:37:48.794361  516753 ssh_runner.go:195] Run: openssl version
	I0730 00:37:48.800358  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:37:48.810540  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:37:48.814929  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:37:48.815002  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:37:48.820814  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:37:48.831108  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:37:48.841250  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:37:48.845266  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:37:48.845330  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:37:48.850667  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:37:48.860636  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:37:48.871385  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:37:48.875694  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:37:48.875774  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:37:48.881549  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
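The openssl/ln blocks above install each CA bundle under /usr/share/ca-certificates and then link it into /etc/ssl/certs as `<subject-hash>.0` (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted certificates by hash. Roughly the same operation in Go, shelling out to openssl for the hash, is sketched below; the cert paths are the ones from the log and the code is not minikube's.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mirrors `openssl x509 -hash -noout -in cert` followed by
    // `ln -fs cert /etc/ssl/certs/<hash>.0`.
    func linkBySubjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link, like the `ln -fs` in the log
        return link, os.Symlink(certPath, link)
    }

    func main() {
        for _, cert := range []string{
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/502384.pem",
            "/usr/share/ca-certificates/5023842.pem",
        } {
            link, err := linkBySubjectHash(cert)
            if err != nil {
                panic(err)
            }
            fmt.Println(cert, "->", link)
        }
    }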
	I0730 00:37:48.891947  516753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:37:48.896014  516753 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 00:37:48.896067  516753 kubeadm.go:934] updating node {m02 192.168.39.126 8443 v1.30.3 crio true true} ...
	I0730 00:37:48.896180  516753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:37:48.896213  516753 kube-vip.go:115] generating kube-vip config ...
	I0730 00:37:48.896248  516753 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:37:48.914384  516753 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:37:48.914459  516753 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
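Everything between `kube-vip config:` and `status: {}` above is a complete static-Pod manifest; it is later copied to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet picks it up directly, without the API server. A quick way to sanity-check such a generated manifest is to round-trip it through the core/v1 Pod type, as in the sketch below (sigs.k8s.io/yaml and k8s.io/api are the assumed dependencies; this is not part of minikube).

    package main

    import (
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var pod corev1.Pod
        // yaml.Unmarshal converts YAML to JSON first, so the corev1 json tags apply.
        if err := yaml.Unmarshal(data, &pod); err != nil {
            panic(fmt.Errorf("manifest does not parse as a Pod: %w", err))
        }
        if !pod.Spec.HostNetwork {
            panic("kube-vip must run with hostNetwork: true to own the VIP on eth0")
        }
        fmt.Printf("static pod %s/%s uses image %s\n",
            pod.Namespace, pod.Name, pod.Spec.Containers[0].Image)
    }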
	I0730 00:37:48.914512  516753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:37:48.924356  516753 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0730 00:37:48.924415  516753 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0730 00:37:48.933719  516753 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0730 00:37:48.933750  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0730 00:37:48.933799  516753 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0730 00:37:48.933830  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0730 00:37:48.933829  516753 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0730 00:37:48.938214  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0730 00:37:48.938241  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0730 00:37:50.201661  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0730 00:37:50.201754  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0730 00:37:50.206398  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0730 00:37:50.206432  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0730 00:38:00.365092  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:38:00.379359  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0730 00:38:00.379482  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0730 00:38:00.383648  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0730 00:38:00.383682  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
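The download.go:107 entries above fetch kubelet/kubeadm/kubectl from dl.k8s.io and verify each file against the published `.sha256` before the binary is scp'd into /var/lib/minikube/binaries. A bare-bones version of that verify-after-download step is sketched below; the kubelet URL is the one from the log, the destination path is arbitrary, and this is an illustration rather than minikube's download code.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetch downloads url into path and returns the hex SHA-256 of what was written.
    func fetch(url, path string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        f, err := os.Create(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"
        got, err := fetch(base, "/tmp/kubelet")
        if err != nil {
            panic(err)
        }
        resp, err := http.Get(base + ".sha256")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        want, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        // The published .sha256 file holds just the hex digest (plus a newline).
        if got != strings.TrimSpace(string(want)) {
            panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
        }
        fmt.Println("kubelet checksum verified:", got)
    }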
	I0730 00:38:00.760191  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0730 00:38:00.769469  516753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0730 00:38:00.784857  516753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:38:00.800208  516753 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0730 00:38:00.815904  516753 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:38:00.819814  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:38:00.831159  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:38:00.936912  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:38:00.953783  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:38:00.954274  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:38:00.954344  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:38:00.970596  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39843
	I0730 00:38:00.971114  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:38:00.971590  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:38:00.971615  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:38:00.971950  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:38:00.972146  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:38:00.972333  516753 start.go:317] joinCluster: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0730 00:38:00.972476  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0730 00:38:00.972496  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:38:00.975638  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:38:00.976172  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:38:00.976205  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:38:00.976381  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:38:00.976565  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:38:00.976728  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:38:00.976868  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:38:01.132634  516753 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:38:01.132688  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rrahj6.2pnfdyo0jftsl9jl --discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-161305-m02 --control-plane --apiserver-advertise-address=192.168.39.126 --apiserver-bind-port=8443"
	I0730 00:38:22.418746  516753 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rrahj6.2pnfdyo0jftsl9jl --discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-161305-m02 --control-plane --apiserver-advertise-address=192.168.39.126 --apiserver-bind-port=8443": (21.286013651s)
	I0730 00:38:22.418787  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0730 00:38:22.917683  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-161305-m02 minikube.k8s.io/updated_at=2024_07_30T00_38_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=ha-161305 minikube.k8s.io/primary=false
	I0730 00:38:23.062014  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-161305-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0730 00:38:23.209584  516753 start.go:319] duration metric: took 22.237244485s to joinCluster
	I0730 00:38:23.209680  516753 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:38:23.210031  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:38:23.211642  516753 out.go:177] * Verifying Kubernetes components...
	I0730 00:38:23.213608  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:38:23.456752  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:38:23.500437  516753 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:38:23.500816  516753 kapi.go:59] client config for ha-161305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key", CAFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0730 00:38:23.500908  516753 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.80:8443
	I0730 00:38:23.501191  516753 node_ready.go:35] waiting up to 6m0s for node "ha-161305-m02" to be "Ready" ...
	I0730 00:38:23.501312  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:23.501323  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:23.501334  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:23.501339  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:23.512730  516753 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0730 00:38:24.002037  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:24.002068  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:24.002079  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:24.002085  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:24.006031  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:24.501999  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:24.502031  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:24.502044  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:24.502054  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:24.505529  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:25.001766  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:25.001800  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:25.001809  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:25.001823  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:25.004247  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:25.501994  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:25.502032  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:25.502040  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:25.502045  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:25.504991  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:25.505508  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
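The round_trippers loop above polls GET /api/v1/nodes/ha-161305-m02 roughly every 500ms until the node's Ready condition flips to True or the 6m budget from node_ready.go:35 runs out. With client-go the same wait reduces to a small loop, as in the sketch below; the kubeconfig path is the one from loader.go above, while the poll interval and timeout are illustrative rather than minikube's internal values.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the NodeReady condition is True.
    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19346-495103/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            n, err := cs.CoreV1().Nodes().Get(ctx, "ha-161305-m02", metav1.GetOptions{})
            if err == nil && nodeIsReady(n) {
                fmt.Println("node ha-161305-m02 is Ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for node to become Ready")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }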
	I0730 00:38:26.002000  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:26.002026  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:26.002037  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:26.002042  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:26.005495  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:26.501953  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:26.501977  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:26.501989  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:26.501997  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:26.504628  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:27.002204  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:27.002229  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:27.002238  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:27.002242  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:27.005498  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:27.502259  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:27.502282  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:27.502294  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:27.502307  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:27.506707  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:38:27.507272  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:28.001741  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:28.001770  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:28.001781  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:28.001786  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:28.004728  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:28.501501  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:28.501528  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:28.501541  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:28.501547  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:28.509465  516753 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0730 00:38:29.001523  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:29.001549  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:29.001559  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:29.001564  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:29.004868  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:29.501939  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:29.501962  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:29.501973  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:29.501978  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:29.505207  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:30.001980  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:30.002005  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:30.002016  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:30.002022  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:30.011446  516753 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0730 00:38:30.012222  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:30.501482  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:30.501505  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:30.501513  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:30.501518  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:30.505183  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:31.001829  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:31.001854  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:31.001863  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:31.001867  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:31.005255  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:31.502256  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:31.502290  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:31.502298  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:31.502302  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:31.505718  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:32.001855  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:32.001882  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:32.001890  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:32.001893  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:32.004578  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:32.501604  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:32.501628  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:32.501636  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:32.501640  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:32.506132  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:38:32.507080  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:33.001978  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:33.002005  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:33.002017  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:33.002025  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:33.004870  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:33.501714  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:33.501740  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:33.501751  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:33.501758  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:33.505143  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:34.001577  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:34.001600  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:34.001608  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:34.001612  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:34.004836  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:34.501626  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:34.501649  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:34.501658  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:34.501662  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:34.504935  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:35.001777  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:35.001802  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:35.001810  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:35.001815  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:35.005102  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:35.005677  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:35.502200  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:35.502229  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:35.502237  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:35.502242  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:35.505721  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:36.001935  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:36.001958  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:36.001967  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:36.001973  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:36.004951  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:36.501883  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:36.501909  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:36.501919  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:36.501923  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:36.504933  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:37.001968  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:37.001991  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:37.002000  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:37.002005  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:37.005496  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:37.006104  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:37.501504  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:37.501532  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:37.501544  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:37.501552  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:37.509457  516753 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0730 00:38:38.001841  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:38.001871  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:38.001883  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:38.001890  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:38.004961  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:38.502203  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:38.502229  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:38.502241  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:38.502245  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:38.505616  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:39.001465  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:39.001492  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:39.001504  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:39.001509  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:39.004565  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:39.501984  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:39.502008  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:39.502017  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:39.502022  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:39.506144  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:38:39.506677  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:40.001989  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:40.002014  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:40.002023  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:40.002028  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:40.005478  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:40.501415  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:40.501443  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:40.501451  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:40.501456  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:40.504719  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.001420  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:41.001446  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.001454  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.001457  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.004900  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.005728  516753 node_ready.go:49] node "ha-161305-m02" has status "Ready":"True"
	I0730 00:38:41.005750  516753 node_ready.go:38] duration metric: took 17.504538043s for node "ha-161305-m02" to be "Ready" ...
	I0730 00:38:41.005761  516753 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:38:41.005842  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:41.005851  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.005859  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.005864  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.011197  516753 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 00:38:41.017726  516753 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.017834  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bdpds
	I0730 00:38:41.017846  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.017857  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.017866  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.020518  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.021113  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.021133  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.021144  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.021152  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.023445  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.023923  516753 pod_ready.go:92] pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.023943  516753 pod_ready.go:81] duration metric: took 6.186327ms for pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.023954  516753 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.024027  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mzcln
	I0730 00:38:41.024037  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.024056  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.024062  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.026332  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.026862  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.026877  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.026884  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.026888  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.029264  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.029611  516753 pod_ready.go:92] pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.029628  516753 pod_ready.go:81] duration metric: took 5.666334ms for pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.029636  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.029682  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305
	I0730 00:38:41.029689  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.029695  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.029700  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.031918  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.032497  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.032511  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.032516  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.032520  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.034666  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.035250  516753 pod_ready.go:92] pod "etcd-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.035266  516753 pod_ready.go:81] duration metric: took 5.624064ms for pod "etcd-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.035273  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.035321  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305-m02
	I0730 00:38:41.035336  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.035343  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.035351  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.037615  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.038037  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:41.038050  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.038057  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.038061  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.040015  516753 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0730 00:38:41.040500  516753 pod_ready.go:92] pod "etcd-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.040515  516753 pod_ready.go:81] duration metric: took 5.236235ms for pod "etcd-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.040531  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.201917  516753 request.go:629] Waited for 161.295825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305
	I0730 00:38:41.201992  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305
	I0730 00:38:41.202000  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.202012  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.202021  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.205243  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.402216  516753 request.go:629] Waited for 196.372053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.402316  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.402333  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.402346  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.402357  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.405528  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.406031  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.406052  516753 pod_ready.go:81] duration metric: took 365.510849ms for pod "kube-apiserver-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.406062  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.602228  516753 request.go:629] Waited for 196.071289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m02
	I0730 00:38:41.602302  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m02
	I0730 00:38:41.602307  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.602315  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.602318  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.606019  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.802184  516753 request.go:629] Waited for 195.17089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:41.802258  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:41.802263  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.802272  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.802277  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.806358  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:38:41.807015  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.807034  516753 pod_ready.go:81] duration metric: took 400.962679ms for pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.807044  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.002140  516753 request.go:629] Waited for 195.026927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305
	I0730 00:38:42.002207  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305
	I0730 00:38:42.002212  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.002220  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.002224  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.005889  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:42.201895  516753 request.go:629] Waited for 195.278311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:42.201969  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:42.201976  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.201987  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.201997  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.205486  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:42.205952  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:42.205978  516753 pod_ready.go:81] duration metric: took 398.925824ms for pod "kube-controller-manager-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.205993  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.402065  516753 request.go:629] Waited for 195.954248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m02
	I0730 00:38:42.402136  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m02
	I0730 00:38:42.402142  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.402149  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.402153  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.405638  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:42.601809  516753 request.go:629] Waited for 195.422281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:42.601914  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:42.601927  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.601938  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.601948  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.605364  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:42.606017  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:42.606041  516753 pod_ready.go:81] duration metric: took 400.038029ms for pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.606056  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqr2f" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.801620  516753 request.go:629] Waited for 195.4652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqr2f
	I0730 00:38:42.801683  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqr2f
	I0730 00:38:42.801688  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.801695  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.801702  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.805510  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.001427  516753 request.go:629] Waited for 195.290569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:43.001505  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:43.001513  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.001521  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.001544  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.004506  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:43.004992  516753 pod_ready.go:92] pod "kube-proxy-pqr2f" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:43.005016  516753 pod_ready.go:81] duration metric: took 398.948113ms for pod "kube-proxy-pqr2f" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.005032  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wptvn" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.202074  516753 request.go:629] Waited for 196.947057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wptvn
	I0730 00:38:43.202148  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wptvn
	I0730 00:38:43.202158  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.202170  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.202178  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.205936  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.402047  516753 request.go:629] Waited for 195.413267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:43.402121  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:43.402128  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.402139  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.402149  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.405264  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.405841  516753 pod_ready.go:92] pod "kube-proxy-wptvn" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:43.405862  516753 pod_ready.go:81] duration metric: took 400.816309ms for pod "kube-proxy-wptvn" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.405872  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.602026  516753 request.go:629] Waited for 196.080796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305
	I0730 00:38:43.602120  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305
	I0730 00:38:43.602130  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.602144  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.602153  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.605247  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.801655  516753 request.go:629] Waited for 195.834831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:43.801738  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:43.801750  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.801762  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.801773  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.805279  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.805732  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:43.805750  516753 pod_ready.go:81] duration metric: took 399.871741ms for pod "kube-scheduler-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.805760  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:44.001902  516753 request.go:629] Waited for 196.042949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m02
	I0730 00:38:44.002008  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m02
	I0730 00:38:44.002017  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.002027  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.002032  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.005331  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:44.202261  516753 request.go:629] Waited for 196.386792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:44.202331  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:44.202337  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.202344  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.202349  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.204873  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:44.205415  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:44.205444  516753 pod_ready.go:81] duration metric: took 399.675361ms for pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:44.205456  516753 pod_ready.go:38] duration metric: took 3.199683199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:38:44.205471  516753 api_server.go:52] waiting for apiserver process to appear ...
	I0730 00:38:44.205531  516753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:38:44.222886  516753 api_server.go:72] duration metric: took 21.013159331s to wait for apiserver process to appear ...
	I0730 00:38:44.222912  516753 api_server.go:88] waiting for apiserver healthz status ...
	I0730 00:38:44.222932  516753 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0730 00:38:44.227033  516753 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0730 00:38:44.227134  516753 round_trippers.go:463] GET https://192.168.39.80:8443/version
	I0730 00:38:44.227147  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.227158  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.227167  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.227905  516753 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0730 00:38:44.228004  516753 api_server.go:141] control plane version: v1.30.3
	I0730 00:38:44.228021  516753 api_server.go:131] duration metric: took 5.102431ms to wait for apiserver health ...
	I0730 00:38:44.228029  516753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 00:38:44.402481  516753 request.go:629] Waited for 174.34802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:44.402543  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:44.402549  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.402566  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.402574  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.410169  516753 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0730 00:38:44.415952  516753 system_pods.go:59] 17 kube-system pods found
	I0730 00:38:44.416000  516753 system_pods.go:61] "coredns-7db6d8ff4d-bdpds" [7c1470c5-85f4-4dfa-84c0-14aa6c713e73] Running
	I0730 00:38:44.416008  516753 system_pods.go:61] "coredns-7db6d8ff4d-mzcln" [cab12f67-38e0-41f7-8414-120064dca1e6] Running
	I0730 00:38:44.416012  516753 system_pods.go:61] "etcd-ha-161305" [5c7dae60-3334-4bbb-90d0-96902a0e19ca] Running
	I0730 00:38:44.416016  516753 system_pods.go:61] "etcd-ha-161305-m02" [18952930-32a5-4b81-a67c-6324aee65eb8] Running
	I0730 00:38:44.416020  516753 system_pods.go:61] "kindnet-dj7v2" [8d584855-119a-4df9-87d4-4c4fd59ec386] Running
	I0730 00:38:44.416024  516753 system_pods.go:61] "kindnet-zrzxf" [3745faa8-044d-4923-8a49-c21a0332e208] Running
	I0730 00:38:44.416029  516753 system_pods.go:61] "kube-apiserver-ha-161305" [55b68f3e-7127-4a03-83d7-ea169937b7b7] Running
	I0730 00:38:44.416044  516753 system_pods.go:61] "kube-apiserver-ha-161305-m02" [834df1fc-4400-475f-b86e-7176f335f79b] Running
	I0730 00:38:44.416050  516753 system_pods.go:61] "kube-controller-manager-ha-161305" [647f1107-c722-4d08-a32b-d53a24f212c9] Running
	I0730 00:38:44.416060  516753 system_pods.go:61] "kube-controller-manager-ha-161305-m02" [2b16c61d-99fe-4807-b362-2361e6d9ec03] Running
	I0730 00:38:44.416065  516753 system_pods.go:61] "kube-proxy-pqr2f" [88c5dd9f-639f-4085-8a0f-064b53e870ea] Running
	I0730 00:38:44.416067  516753 system_pods.go:61] "kube-proxy-wptvn" [1733d06b-6eb7-4dd5-9349-b727cc05e797] Running
	I0730 00:38:44.416071  516753 system_pods.go:61] "kube-scheduler-ha-161305" [c9ce0f0c-40b3-44ea-8c7d-f8b1d7af9e16] Running
	I0730 00:38:44.416075  516753 system_pods.go:61] "kube-scheduler-ha-161305-m02" [98fa3e7a-7ed2-44b7-a1be-7121ca4899b0] Running
	I0730 00:38:44.416080  516753 system_pods.go:61] "kube-vip-ha-161305" [084d986e-4abd-4c66-aea9-5738f6a60ac5] Running
	I0730 00:38:44.416083  516753 system_pods.go:61] "kube-vip-ha-161305-m02" [6282069b-1ac8-44eb-910f-d658a28ae0f1] Running
	I0730 00:38:44.416089  516753 system_pods.go:61] "storage-provisioner" [75260b22-5ffc-4848-8c70-5b9cb3f010bf] Running
	I0730 00:38:44.416096  516753 system_pods.go:74] duration metric: took 188.053859ms to wait for pod list to return data ...
	I0730 00:38:44.416107  516753 default_sa.go:34] waiting for default service account to be created ...
	I0730 00:38:44.601552  516753 request.go:629] Waited for 185.33914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/default/serviceaccounts
	I0730 00:38:44.601625  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/default/serviceaccounts
	I0730 00:38:44.601631  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.601639  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.601647  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.604843  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:44.605108  516753 default_sa.go:45] found service account: "default"
	I0730 00:38:44.605129  516753 default_sa.go:55] duration metric: took 189.010974ms for default service account to be created ...
	I0730 00:38:44.605139  516753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 00:38:44.801526  516753 request.go:629] Waited for 196.303267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:44.801618  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:44.801624  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.801631  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.801636  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.806715  516753 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 00:38:44.810529  516753 system_pods.go:86] 17 kube-system pods found
	I0730 00:38:44.810555  516753 system_pods.go:89] "coredns-7db6d8ff4d-bdpds" [7c1470c5-85f4-4dfa-84c0-14aa6c713e73] Running
	I0730 00:38:44.810561  516753 system_pods.go:89] "coredns-7db6d8ff4d-mzcln" [cab12f67-38e0-41f7-8414-120064dca1e6] Running
	I0730 00:38:44.810565  516753 system_pods.go:89] "etcd-ha-161305" [5c7dae60-3334-4bbb-90d0-96902a0e19ca] Running
	I0730 00:38:44.810570  516753 system_pods.go:89] "etcd-ha-161305-m02" [18952930-32a5-4b81-a67c-6324aee65eb8] Running
	I0730 00:38:44.810574  516753 system_pods.go:89] "kindnet-dj7v2" [8d584855-119a-4df9-87d4-4c4fd59ec386] Running
	I0730 00:38:44.810578  516753 system_pods.go:89] "kindnet-zrzxf" [3745faa8-044d-4923-8a49-c21a0332e208] Running
	I0730 00:38:44.810585  516753 system_pods.go:89] "kube-apiserver-ha-161305" [55b68f3e-7127-4a03-83d7-ea169937b7b7] Running
	I0730 00:38:44.810589  516753 system_pods.go:89] "kube-apiserver-ha-161305-m02" [834df1fc-4400-475f-b86e-7176f335f79b] Running
	I0730 00:38:44.810596  516753 system_pods.go:89] "kube-controller-manager-ha-161305" [647f1107-c722-4d08-a32b-d53a24f212c9] Running
	I0730 00:38:44.810600  516753 system_pods.go:89] "kube-controller-manager-ha-161305-m02" [2b16c61d-99fe-4807-b362-2361e6d9ec03] Running
	I0730 00:38:44.810607  516753 system_pods.go:89] "kube-proxy-pqr2f" [88c5dd9f-639f-4085-8a0f-064b53e870ea] Running
	I0730 00:38:44.810610  516753 system_pods.go:89] "kube-proxy-wptvn" [1733d06b-6eb7-4dd5-9349-b727cc05e797] Running
	I0730 00:38:44.810614  516753 system_pods.go:89] "kube-scheduler-ha-161305" [c9ce0f0c-40b3-44ea-8c7d-f8b1d7af9e16] Running
	I0730 00:38:44.810619  516753 system_pods.go:89] "kube-scheduler-ha-161305-m02" [98fa3e7a-7ed2-44b7-a1be-7121ca4899b0] Running
	I0730 00:38:44.810623  516753 system_pods.go:89] "kube-vip-ha-161305" [084d986e-4abd-4c66-aea9-5738f6a60ac5] Running
	I0730 00:38:44.810627  516753 system_pods.go:89] "kube-vip-ha-161305-m02" [6282069b-1ac8-44eb-910f-d658a28ae0f1] Running
	I0730 00:38:44.810630  516753 system_pods.go:89] "storage-provisioner" [75260b22-5ffc-4848-8c70-5b9cb3f010bf] Running
	I0730 00:38:44.810637  516753 system_pods.go:126] duration metric: took 205.489759ms to wait for k8s-apps to be running ...
	I0730 00:38:44.810660  516753 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 00:38:44.810712  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:38:44.823949  516753 system_svc.go:56] duration metric: took 13.278644ms WaitForService to wait for kubelet
	I0730 00:38:44.823982  516753 kubeadm.go:582] duration metric: took 21.614261776s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:38:44.824007  516753 node_conditions.go:102] verifying NodePressure condition ...
	I0730 00:38:45.002457  516753 request.go:629] Waited for 178.352962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes
	I0730 00:38:45.002519  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes
	I0730 00:38:45.002524  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:45.002532  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:45.002540  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:45.006051  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:45.006821  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:38:45.006845  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:38:45.006857  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:38:45.006861  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:38:45.006867  516753 node_conditions.go:105] duration metric: took 182.855378ms to run NodePressure ...
	I0730 00:38:45.006882  516753 start.go:241] waiting for startup goroutines ...
	I0730 00:38:45.006908  516753 start.go:255] writing updated cluster config ...
	I0730 00:38:45.009162  516753 out.go:177] 
	I0730 00:38:45.010675  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:38:45.010761  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:38:45.012437  516753 out.go:177] * Starting "ha-161305-m03" control-plane node in "ha-161305" cluster
	I0730 00:38:45.013676  516753 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:38:45.013705  516753 cache.go:56] Caching tarball of preloaded images
	I0730 00:38:45.013831  516753 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:38:45.013845  516753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:38:45.013955  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:38:45.014155  516753 start.go:360] acquireMachinesLock for ha-161305-m03: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:38:45.014211  516753 start.go:364] duration metric: took 33.65µs to acquireMachinesLock for "ha-161305-m03"
	I0730 00:38:45.014237  516753 start.go:93] Provisioning new machine with config: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:38:45.014356  516753 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0730 00:38:45.015921  516753 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0730 00:38:45.016012  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:38:45.016057  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:38:45.031210  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41399
	I0730 00:38:45.031641  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:38:45.032115  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:38:45.032137  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:38:45.032535  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:38:45.032769  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetMachineName
	I0730 00:38:45.033003  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:38:45.033265  516753 start.go:159] libmachine.API.Create for "ha-161305" (driver="kvm2")
	I0730 00:38:45.033307  516753 client.go:168] LocalClient.Create starting
	I0730 00:38:45.033349  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 00:38:45.033389  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:38:45.033405  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:38:45.033462  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 00:38:45.033480  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:38:45.033491  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:38:45.033507  516753 main.go:141] libmachine: Running pre-create checks...
	I0730 00:38:45.033515  516753 main.go:141] libmachine: (ha-161305-m03) Calling .PreCreateCheck
	I0730 00:38:45.033717  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetConfigRaw
	I0730 00:38:45.034134  516753 main.go:141] libmachine: Creating machine...
	I0730 00:38:45.034146  516753 main.go:141] libmachine: (ha-161305-m03) Calling .Create
	I0730 00:38:45.034286  516753 main.go:141] libmachine: (ha-161305-m03) Creating KVM machine...
	I0730 00:38:45.035837  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found existing default KVM network
	I0730 00:38:45.036001  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found existing private KVM network mk-ha-161305
	I0730 00:38:45.036142  516753 main.go:141] libmachine: (ha-161305-m03) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03 ...
	I0730 00:38:45.036167  516753 main.go:141] libmachine: (ha-161305-m03) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 00:38:45.036211  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:45.036113  517582 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:38:45.036301  516753 main.go:141] libmachine: (ha-161305-m03) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 00:38:45.304450  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:45.304320  517582 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa...
	I0730 00:38:45.384479  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:45.384323  517582 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/ha-161305-m03.rawdisk...
	I0730 00:38:45.384520  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Writing magic tar header
	I0730 00:38:45.384540  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Writing SSH key tar header
	I0730 00:38:45.384552  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:45.384447  517582 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03 ...
	I0730 00:38:45.384568  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03
	I0730 00:38:45.384646  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 00:38:45.384673  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03 (perms=drwx------)
	I0730 00:38:45.384682  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:38:45.384730  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 00:38:45.384758  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 00:38:45.384769  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 00:38:45.384781  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 00:38:45.384790  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 00:38:45.384819  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 00:38:45.384845  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 00:38:45.384857  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins
	I0730 00:38:45.384871  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home
	I0730 00:38:45.384883  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Skipping /home - not owner
	I0730 00:38:45.384901  516753 main.go:141] libmachine: (ha-161305-m03) Creating domain...
	I0730 00:38:45.385805  516753 main.go:141] libmachine: (ha-161305-m03) define libvirt domain using xml: 
	I0730 00:38:45.385825  516753 main.go:141] libmachine: (ha-161305-m03) <domain type='kvm'>
	I0730 00:38:45.385833  516753 main.go:141] libmachine: (ha-161305-m03)   <name>ha-161305-m03</name>
	I0730 00:38:45.385841  516753 main.go:141] libmachine: (ha-161305-m03)   <memory unit='MiB'>2200</memory>
	I0730 00:38:45.385846  516753 main.go:141] libmachine: (ha-161305-m03)   <vcpu>2</vcpu>
	I0730 00:38:45.385854  516753 main.go:141] libmachine: (ha-161305-m03)   <features>
	I0730 00:38:45.385870  516753 main.go:141] libmachine: (ha-161305-m03)     <acpi/>
	I0730 00:38:45.385880  516753 main.go:141] libmachine: (ha-161305-m03)     <apic/>
	I0730 00:38:45.385888  516753 main.go:141] libmachine: (ha-161305-m03)     <pae/>
	I0730 00:38:45.385895  516753 main.go:141] libmachine: (ha-161305-m03)     
	I0730 00:38:45.385907  516753 main.go:141] libmachine: (ha-161305-m03)   </features>
	I0730 00:38:45.385916  516753 main.go:141] libmachine: (ha-161305-m03)   <cpu mode='host-passthrough'>
	I0730 00:38:45.385921  516753 main.go:141] libmachine: (ha-161305-m03)   
	I0730 00:38:45.385927  516753 main.go:141] libmachine: (ha-161305-m03)   </cpu>
	I0730 00:38:45.385950  516753 main.go:141] libmachine: (ha-161305-m03)   <os>
	I0730 00:38:45.385974  516753 main.go:141] libmachine: (ha-161305-m03)     <type>hvm</type>
	I0730 00:38:45.385988  516753 main.go:141] libmachine: (ha-161305-m03)     <boot dev='cdrom'/>
	I0730 00:38:45.385999  516753 main.go:141] libmachine: (ha-161305-m03)     <boot dev='hd'/>
	I0730 00:38:45.386010  516753 main.go:141] libmachine: (ha-161305-m03)     <bootmenu enable='no'/>
	I0730 00:38:45.386020  516753 main.go:141] libmachine: (ha-161305-m03)   </os>
	I0730 00:38:45.386030  516753 main.go:141] libmachine: (ha-161305-m03)   <devices>
	I0730 00:38:45.386038  516753 main.go:141] libmachine: (ha-161305-m03)     <disk type='file' device='cdrom'>
	I0730 00:38:45.386072  516753 main.go:141] libmachine: (ha-161305-m03)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/boot2docker.iso'/>
	I0730 00:38:45.386097  516753 main.go:141] libmachine: (ha-161305-m03)       <target dev='hdc' bus='scsi'/>
	I0730 00:38:45.386108  516753 main.go:141] libmachine: (ha-161305-m03)       <readonly/>
	I0730 00:38:45.386119  516753 main.go:141] libmachine: (ha-161305-m03)     </disk>
	I0730 00:38:45.386132  516753 main.go:141] libmachine: (ha-161305-m03)     <disk type='file' device='disk'>
	I0730 00:38:45.386149  516753 main.go:141] libmachine: (ha-161305-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 00:38:45.386166  516753 main.go:141] libmachine: (ha-161305-m03)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/ha-161305-m03.rawdisk'/>
	I0730 00:38:45.386180  516753 main.go:141] libmachine: (ha-161305-m03)       <target dev='hda' bus='virtio'/>
	I0730 00:38:45.386191  516753 main.go:141] libmachine: (ha-161305-m03)     </disk>
	I0730 00:38:45.386201  516753 main.go:141] libmachine: (ha-161305-m03)     <interface type='network'>
	I0730 00:38:45.386211  516753 main.go:141] libmachine: (ha-161305-m03)       <source network='mk-ha-161305'/>
	I0730 00:38:45.386226  516753 main.go:141] libmachine: (ha-161305-m03)       <model type='virtio'/>
	I0730 00:38:45.386236  516753 main.go:141] libmachine: (ha-161305-m03)     </interface>
	I0730 00:38:45.386247  516753 main.go:141] libmachine: (ha-161305-m03)     <interface type='network'>
	I0730 00:38:45.386262  516753 main.go:141] libmachine: (ha-161305-m03)       <source network='default'/>
	I0730 00:38:45.386273  516753 main.go:141] libmachine: (ha-161305-m03)       <model type='virtio'/>
	I0730 00:38:45.386290  516753 main.go:141] libmachine: (ha-161305-m03)     </interface>
	I0730 00:38:45.386308  516753 main.go:141] libmachine: (ha-161305-m03)     <serial type='pty'>
	I0730 00:38:45.386322  516753 main.go:141] libmachine: (ha-161305-m03)       <target port='0'/>
	I0730 00:38:45.386332  516753 main.go:141] libmachine: (ha-161305-m03)     </serial>
	I0730 00:38:45.386344  516753 main.go:141] libmachine: (ha-161305-m03)     <console type='pty'>
	I0730 00:38:45.386355  516753 main.go:141] libmachine: (ha-161305-m03)       <target type='serial' port='0'/>
	I0730 00:38:45.386365  516753 main.go:141] libmachine: (ha-161305-m03)     </console>
	I0730 00:38:45.386377  516753 main.go:141] libmachine: (ha-161305-m03)     <rng model='virtio'>
	I0730 00:38:45.386386  516753 main.go:141] libmachine: (ha-161305-m03)       <backend model='random'>/dev/random</backend>
	I0730 00:38:45.386396  516753 main.go:141] libmachine: (ha-161305-m03)     </rng>
	I0730 00:38:45.386408  516753 main.go:141] libmachine: (ha-161305-m03)     
	I0730 00:38:45.386418  516753 main.go:141] libmachine: (ha-161305-m03)     
	I0730 00:38:45.386432  516753 main.go:141] libmachine: (ha-161305-m03)   </devices>
	I0730 00:38:45.386445  516753 main.go:141] libmachine: (ha-161305-m03) </domain>
	I0730 00:38:45.386454  516753 main.go:141] libmachine: (ha-161305-m03) 
	I0730 00:38:45.393444  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:17:86:3b in network default
	I0730 00:38:45.394024  516753 main.go:141] libmachine: (ha-161305-m03) Ensuring networks are active...
	I0730 00:38:45.394047  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:45.394772  516753 main.go:141] libmachine: (ha-161305-m03) Ensuring network default is active
	I0730 00:38:45.394991  516753 main.go:141] libmachine: (ha-161305-m03) Ensuring network mk-ha-161305 is active
	I0730 00:38:45.395403  516753 main.go:141] libmachine: (ha-161305-m03) Getting domain xml...
	I0730 00:38:45.396108  516753 main.go:141] libmachine: (ha-161305-m03) Creating domain...
	I0730 00:38:46.631653  516753 main.go:141] libmachine: (ha-161305-m03) Waiting to get IP...
	I0730 00:38:46.632600  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:46.633076  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:46.633104  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:46.633057  517582 retry.go:31] will retry after 251.235798ms: waiting for machine to come up
	I0730 00:38:46.885588  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:46.885991  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:46.886025  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:46.885933  517582 retry.go:31] will retry after 331.91891ms: waiting for machine to come up
	I0730 00:38:47.219503  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:47.219871  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:47.219898  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:47.219824  517582 retry.go:31] will retry after 463.441174ms: waiting for machine to come up
	I0730 00:38:47.684510  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:47.684934  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:47.684957  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:47.684908  517582 retry.go:31] will retry after 367.134484ms: waiting for machine to come up
	I0730 00:38:48.053448  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:48.053963  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:48.053998  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:48.053906  517582 retry.go:31] will retry after 592.153453ms: waiting for machine to come up
	I0730 00:38:48.647392  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:48.647853  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:48.647880  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:48.647791  517582 retry.go:31] will retry after 808.251785ms: waiting for machine to come up
	I0730 00:38:49.457338  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:49.457669  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:49.457705  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:49.457626  517582 retry.go:31] will retry after 1.15599727s: waiting for machine to come up
	I0730 00:38:50.615145  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:50.615601  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:50.615622  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:50.615575  517582 retry.go:31] will retry after 1.157106732s: waiting for machine to come up
	I0730 00:38:51.773825  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:51.774237  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:51.774266  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:51.774183  517582 retry.go:31] will retry after 1.822875974s: waiting for machine to come up
	I0730 00:38:53.598782  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:53.599392  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:53.599422  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:53.599335  517582 retry.go:31] will retry after 2.16104532s: waiting for machine to come up
	I0730 00:38:55.762546  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:55.763013  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:55.763044  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:55.762969  517582 retry.go:31] will retry after 2.04317933s: waiting for machine to come up
	I0730 00:38:57.807343  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:57.807731  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:57.807754  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:57.807683  517582 retry.go:31] will retry after 3.113783261s: waiting for machine to come up
	I0730 00:39:00.923093  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:00.923591  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:39:00.923625  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:39:00.923538  517582 retry.go:31] will retry after 3.618921973s: waiting for machine to come up
	I0730 00:39:04.545762  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.546279  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has current primary IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.546313  516753 main.go:141] libmachine: (ha-161305-m03) Found IP for machine: 192.168.39.23
	I0730 00:39:04.546364  516753 main.go:141] libmachine: (ha-161305-m03) Reserving static IP address...
	I0730 00:39:04.546793  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find host DHCP lease matching {name: "ha-161305-m03", mac: "52:54:00:e7:c4:d8", ip: "192.168.39.23"} in network mk-ha-161305
	I0730 00:39:04.622641  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Getting to WaitForSSH function...
	I0730 00:39:04.622677  516753 main.go:141] libmachine: (ha-161305-m03) Reserved static IP address: 192.168.39.23
	I0730 00:39:04.622690  516753 main.go:141] libmachine: (ha-161305-m03) Waiting for SSH to be available...
	I0730 00:39:04.625419  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.625849  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:04.625894  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.626108  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Using SSH client type: external
	I0730 00:39:04.626139  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa (-rw-------)
	I0730 00:39:04.626169  516753 main.go:141] libmachine: (ha-161305-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:39:04.626181  516753 main.go:141] libmachine: (ha-161305-m03) DBG | About to run SSH command:
	I0730 00:39:04.626197  516753 main.go:141] libmachine: (ha-161305-m03) DBG | exit 0
	I0730 00:39:04.752761  516753 main.go:141] libmachine: (ha-161305-m03) DBG | SSH cmd err, output: <nil>: 
	I0730 00:39:04.753145  516753 main.go:141] libmachine: (ha-161305-m03) KVM machine creation complete!
	I0730 00:39:04.753483  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetConfigRaw
	I0730 00:39:04.754205  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:04.754443  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:04.754629  516753 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 00:39:04.754646  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:39:04.756020  516753 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 00:39:04.756037  516753 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 00:39:04.756045  516753 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 00:39:04.756054  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:04.758362  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.758708  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:04.758741  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.758835  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:04.759044  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.759222  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.759369  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:04.759575  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:04.759805  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:04.759819  516753 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 00:39:04.863976  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:39:04.864005  516753 main.go:141] libmachine: Detecting the provisioner...
	I0730 00:39:04.864012  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:04.867492  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.868000  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:04.868032  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.868215  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:04.868409  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.868584  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.868750  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:04.868945  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:04.869116  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:04.869126  516753 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 00:39:04.973058  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 00:39:04.973138  516753 main.go:141] libmachine: found compatible host: buildroot
	I0730 00:39:04.973148  516753 main.go:141] libmachine: Provisioning with buildroot...
	I0730 00:39:04.973157  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetMachineName
	I0730 00:39:04.973453  516753 buildroot.go:166] provisioning hostname "ha-161305-m03"
	I0730 00:39:04.973483  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetMachineName
	I0730 00:39:04.973700  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:04.976343  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.976695  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:04.976748  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.976917  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:04.977127  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.977296  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.977500  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:04.977692  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:04.977887  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:04.977902  516753 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305-m03 && echo "ha-161305-m03" | sudo tee /etc/hostname
	I0730 00:39:05.098460  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305-m03
	
	I0730 00:39:05.098495  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.101323  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.101703  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.101731  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.101934  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.102170  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.102360  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.102522  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.102711  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:05.102923  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:05.102940  516753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:39:05.220395  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:39:05.220440  516753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:39:05.220467  516753 buildroot.go:174] setting up certificates
	I0730 00:39:05.220481  516753 provision.go:84] configureAuth start
	I0730 00:39:05.220496  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetMachineName
	I0730 00:39:05.220829  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:39:05.223171  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.223547  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.223573  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.223736  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.226024  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.226412  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.226435  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.226602  516753 provision.go:143] copyHostCerts
	I0730 00:39:05.226637  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:39:05.226688  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:39:05.226707  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:39:05.226793  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:39:05.226889  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:39:05.226916  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:39:05.226926  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:39:05.226965  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:39:05.227032  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:39:05.227055  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:39:05.227064  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:39:05.227095  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:39:05.227166  516753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305-m03 san=[127.0.0.1 192.168.39.23 ha-161305-m03 localhost minikube]
	I0730 00:39:05.282372  516753 provision.go:177] copyRemoteCerts
	I0730 00:39:05.282436  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:39:05.282463  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.285547  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.285901  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.285931  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.286184  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.286417  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.286607  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.286757  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:39:05.371512  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:39:05.371617  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:39:05.396955  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:39:05.397049  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 00:39:05.419732  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:39:05.419815  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:39:05.441378  516753 provision.go:87] duration metric: took 220.880297ms to configureAuth
	I0730 00:39:05.441410  516753 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:39:05.441675  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:39:05.441767  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.444532  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.444901  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.444928  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.445121  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.445349  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.445556  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.445714  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.445916  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:05.446080  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:05.446095  516753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:39:05.710472  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:39:05.710500  516753 main.go:141] libmachine: Checking connection to Docker...
	I0730 00:39:05.710508  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetURL
	I0730 00:39:05.711727  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Using libvirt version 6000000
	I0730 00:39:05.715119  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.715632  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.715658  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.715826  516753 main.go:141] libmachine: Docker is up and running!
	I0730 00:39:05.715842  516753 main.go:141] libmachine: Reticulating splines...
	I0730 00:39:05.715850  516753 client.go:171] duration metric: took 20.682531918s to LocalClient.Create
	I0730 00:39:05.715875  516753 start.go:167] duration metric: took 20.682615707s to libmachine.API.Create "ha-161305"
	I0730 00:39:05.715882  516753 start.go:293] postStartSetup for "ha-161305-m03" (driver="kvm2")
	I0730 00:39:05.715892  516753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:39:05.715908  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.716143  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:39:05.716174  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.718445  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.718857  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.718884  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.719053  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.719256  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.719449  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.719603  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:39:05.806900  516753 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:39:05.810896  516753 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:39:05.810921  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:39:05.810980  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:39:05.811076  516753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:39:05.811087  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:39:05.811169  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:39:05.819685  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:39:05.841872  516753 start.go:296] duration metric: took 125.975471ms for postStartSetup
	I0730 00:39:05.841926  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetConfigRaw
	I0730 00:39:05.842548  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:39:05.845348  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.845781  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.845807  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.846198  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:39:05.846441  516753 start.go:128] duration metric: took 20.832069779s to createHost
	I0730 00:39:05.846474  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.848982  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.849383  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.849412  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.849571  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.849769  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.849938  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.850086  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.850284  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:05.850456  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:05.850466  516753 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 00:39:05.957277  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722299945.928805840
	
	I0730 00:39:05.957310  516753 fix.go:216] guest clock: 1722299945.928805840
	I0730 00:39:05.957318  516753 fix.go:229] Guest: 2024-07-30 00:39:05.92880584 +0000 UTC Remote: 2024-07-30 00:39:05.846456904 +0000 UTC m=+157.216279571 (delta=82.348936ms)
	I0730 00:39:05.957337  516753 fix.go:200] guest clock delta is within tolerance: 82.348936ms
	I0730 00:39:05.957343  516753 start.go:83] releasing machines lock for "ha-161305-m03", held for 20.943120972s
	I0730 00:39:05.957361  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.957662  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:39:05.960319  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.960668  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.960697  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.963170  516753 out.go:177] * Found network options:
	I0730 00:39:05.964611  516753 out.go:177]   - NO_PROXY=192.168.39.80,192.168.39.126
	W0730 00:39:05.965865  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	W0730 00:39:05.965887  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 00:39:05.965904  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.966503  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.966712  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.966827  516753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:39:05.966876  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	W0730 00:39:05.966903  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	W0730 00:39:05.966925  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 00:39:05.967033  516753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:39:05.967059  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.969953  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.970276  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.970306  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.970354  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.970577  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.970798  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.970852  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.970877  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.970971  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.971055  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.971141  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:39:05.971194  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.971342  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.971496  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:39:06.209106  516753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:39:06.215673  516753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:39:06.215743  516753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:39:06.232821  516753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 00:39:06.232845  516753 start.go:495] detecting cgroup driver to use...
	I0730 00:39:06.232924  516753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:39:06.248818  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:39:06.262755  516753 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:39:06.262815  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:39:06.276401  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:39:06.290150  516753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:39:06.417763  516753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:39:06.559300  516753 docker.go:233] disabling docker service ...
	I0730 00:39:06.559399  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:39:06.578963  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:39:06.591263  516753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:39:06.722677  516753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:39:06.833582  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:39:06.847857  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:39:06.866197  516753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:39:06.866269  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.878077  516753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:39:06.878143  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.888444  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.898494  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.908498  516753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:39:06.918372  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.928530  516753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.945248  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.955740  516753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:39:06.965090  516753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 00:39:06.965160  516753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 00:39:06.978702  516753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:39:06.989889  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:39:07.105796  516753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 00:39:07.247139  516753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:39:07.247236  516753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:39:07.251631  516753 start.go:563] Will wait 60s for crictl version
	I0730 00:39:07.251693  516753 ssh_runner.go:195] Run: which crictl
	I0730 00:39:07.255268  516753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:39:07.292292  516753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:39:07.292369  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:39:07.320137  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:39:07.351426  516753 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:39:07.352885  516753 out.go:177]   - env NO_PROXY=192.168.39.80
	I0730 00:39:07.354075  516753 out.go:177]   - env NO_PROXY=192.168.39.80,192.168.39.126
	I0730 00:39:07.355118  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:39:07.357961  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:07.358318  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:07.358354  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:07.358612  516753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:39:07.362574  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:39:07.374603  516753 mustload.go:65] Loading cluster: ha-161305
	I0730 00:39:07.374857  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:39:07.375118  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:39:07.375162  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:39:07.390803  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0730 00:39:07.391252  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:39:07.391810  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:39:07.391832  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:39:07.392172  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:39:07.392366  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:39:07.394068  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:39:07.394385  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:39:07.394422  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:39:07.409550  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I0730 00:39:07.409931  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:39:07.410364  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:39:07.410389  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:39:07.410767  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:39:07.410999  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:39:07.411175  516753 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.23
	I0730 00:39:07.411188  516753 certs.go:194] generating shared ca certs ...
	I0730 00:39:07.411202  516753 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:39:07.411368  516753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:39:07.411409  516753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:39:07.411420  516753 certs.go:256] generating profile certs ...
	I0730 00:39:07.411491  516753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:39:07.411514  516753 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.dd5da9ed
	I0730 00:39:07.411528  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.dd5da9ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.126 192.168.39.23 192.168.39.254]
	I0730 00:39:07.498421  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.dd5da9ed ...
	I0730 00:39:07.498457  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.dd5da9ed: {Name:mka51ce7224e7be62982785ca0a5d827177c78bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:39:07.498659  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.dd5da9ed ...
	I0730 00:39:07.498676  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.dd5da9ed: {Name:mke31ca91f4cf5aa80f2d78bd811dd38219b955c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:39:07.498774  516753 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.dd5da9ed -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:39:07.498914  516753 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.dd5da9ed -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
	I0730 00:39:07.499045  516753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:39:07.499063  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:39:07.499076  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:39:07.499091  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:39:07.499104  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:39:07.499118  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:39:07.499130  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:39:07.499144  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:39:07.499156  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:39:07.499205  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:39:07.499232  516753 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:39:07.499241  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:39:07.499260  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:39:07.499281  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:39:07.499301  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:39:07.499350  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:39:07.499375  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:39:07.499387  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:39:07.499399  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:39:07.499433  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:39:07.502457  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:39:07.502869  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:39:07.502894  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:39:07.503074  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:39:07.503304  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:39:07.503452  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:39:07.503564  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:39:07.581193  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0730 00:39:07.586347  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0730 00:39:07.596873  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0730 00:39:07.600660  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0730 00:39:07.612128  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0730 00:39:07.616807  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0730 00:39:07.627060  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0730 00:39:07.631688  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0730 00:39:07.642957  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0730 00:39:07.646916  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0730 00:39:07.657049  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0730 00:39:07.661782  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0730 00:39:07.673347  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:39:07.700025  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:39:07.728378  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:39:07.756115  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:39:07.781783  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0730 00:39:07.805695  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 00:39:07.829008  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:39:07.852820  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:39:07.875139  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:39:07.898794  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:39:07.921496  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:39:07.946220  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0730 00:39:07.961805  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0730 00:39:07.977857  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0730 00:39:07.997661  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0730 00:39:08.014452  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0730 00:39:08.031804  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0730 00:39:08.047186  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0730 00:39:08.062466  516753 ssh_runner.go:195] Run: openssl version
	I0730 00:39:08.067840  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:39:08.078012  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:39:08.082724  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:39:08.082796  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:39:08.088185  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:39:08.098493  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:39:08.109250  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:39:08.113938  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:39:08.114000  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:39:08.119602  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:39:08.130088  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:39:08.141150  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:39:08.145107  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:39:08.145171  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:39:08.151000  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 00:39:08.161268  516753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:39:08.165143  516753 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 00:39:08.165221  516753 kubeadm.go:934] updating node {m03 192.168.39.23 8443 v1.30.3 crio true true} ...
	I0730 00:39:08.165330  516753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:39:08.165363  516753 kube-vip.go:115] generating kube-vip config ...
	I0730 00:39:08.165408  516753 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:39:08.182050  516753 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:39:08.182139  516753 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0730 00:39:08.182221  516753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:39:08.193035  516753 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0730 00:39:08.193101  516753 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0730 00:39:08.202908  516753 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0730 00:39:08.202919  516753 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0730 00:39:08.202941  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0730 00:39:08.202940  516753 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0730 00:39:08.202962  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:39:08.202963  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0730 00:39:08.203014  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0730 00:39:08.203028  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0730 00:39:08.220000  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0730 00:39:08.220045  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0730 00:39:08.220052  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0730 00:39:08.220073  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0730 00:39:08.220090  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0730 00:39:08.220281  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0730 00:39:08.251715  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0730 00:39:08.251761  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0730 00:39:09.090942  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0730 00:39:09.100443  516753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0730 00:39:09.116571  516753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:39:09.132612  516753 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0730 00:39:09.149682  516753 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:39:09.153360  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:39:09.164571  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:39:09.283359  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:39:09.300550  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:39:09.300931  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:39:09.300988  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:39:09.317396  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0730 00:39:09.318001  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:39:09.318630  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:39:09.318657  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:39:09.319044  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:39:09.319266  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:39:09.319465  516753 start.go:317] joinCluster: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:39:09.319652  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0730 00:39:09.319681  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:39:09.322968  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:39:09.323447  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:39:09.323489  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:39:09.323654  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:39:09.323827  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:39:09.323939  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:39:09.324058  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:39:09.484200  516753 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:39:09.484264  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token th4l0z.ino3nmjzd3n2m912 --discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-161305-m03 --control-plane --apiserver-advertise-address=192.168.39.23 --apiserver-bind-port=8443"
	I0730 00:39:32.818813  516753 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token th4l0z.ino3nmjzd3n2m912 --discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-161305-m03 --control-plane --apiserver-advertise-address=192.168.39.23 --apiserver-bind-port=8443": (23.334518779s)
	I0730 00:39:32.818856  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0730 00:39:33.419606  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-161305-m03 minikube.k8s.io/updated_at=2024_07_30T00_39_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=ha-161305 minikube.k8s.io/primary=false
	I0730 00:39:33.536762  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-161305-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0730 00:39:33.632378  516753 start.go:319] duration metric: took 24.312908762s to joinCluster
	I0730 00:39:33.632491  516753 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:39:33.632858  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:39:33.634081  516753 out.go:177] * Verifying Kubernetes components...
	I0730 00:39:33.635418  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:39:33.911969  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:39:33.930050  516753 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:39:33.930273  516753 kapi.go:59] client config for ha-161305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key", CAFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0730 00:39:33.930329  516753 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.80:8443
	I0730 00:39:33.930542  516753 node_ready.go:35] waiting up to 6m0s for node "ha-161305-m03" to be "Ready" ...
	I0730 00:39:33.930632  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:33.930641  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:33.930648  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:33.930652  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:33.934269  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:34.431783  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:34.431808  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:34.431819  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:34.431824  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:34.435252  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:34.931555  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:34.931579  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:34.931592  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:34.931599  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:34.935359  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:35.430967  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:35.430994  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:35.431009  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:35.431018  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:35.437104  516753 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 00:39:35.930808  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:35.930831  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:35.930839  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:35.930844  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:35.933668  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:35.934539  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:36.431520  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:36.431551  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:36.431563  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:36.431570  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:36.435119  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:36.931515  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:36.931542  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:36.931551  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:36.931556  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:36.935366  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:37.431003  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:37.431024  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:37.431031  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:37.431037  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:37.434908  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:37.931483  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:37.931511  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:37.931523  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:37.931528  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:37.936330  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:39:37.936935  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:38.431257  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:38.431287  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:38.431296  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:38.431300  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:38.435020  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:38.930774  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:38.930798  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:38.930806  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:38.930809  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:38.934630  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:39.430899  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:39.430927  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:39.430939  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:39.430945  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:39.435151  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:39:39.931824  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:39.931858  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:39.931870  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:39.931876  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:39.935552  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:40.430822  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:40.430844  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:40.430852  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:40.430857  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:40.437458  516753 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 00:39:40.438137  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:40.930996  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:40.931022  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:40.931040  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:40.931047  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:40.934641  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:41.431390  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:41.431414  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:41.431425  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:41.431431  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:41.436495  516753 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 00:39:41.931643  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:41.931671  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:41.931680  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:41.931685  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:41.935175  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:42.431307  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:42.431332  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:42.431343  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:42.431349  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:42.434611  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:42.931405  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:42.931428  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:42.931437  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:42.931441  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:42.934995  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:42.935591  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:43.431646  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:43.431670  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:43.431678  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:43.431681  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:43.435512  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:43.931208  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:43.931237  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:43.931260  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:43.931268  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:43.934720  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:44.430980  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:44.431004  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:44.431012  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:44.431018  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:44.434486  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:44.931589  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:44.931617  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:44.931627  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:44.931633  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:44.935406  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:44.935958  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:45.430795  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:45.430818  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:45.430826  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:45.430831  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:45.434122  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:45.931158  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:45.931179  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:45.931187  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:45.931192  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:45.934698  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:46.430848  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:46.430872  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:46.430880  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:46.430884  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:46.434288  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:46.931375  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:46.931400  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:46.931408  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:46.931411  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:46.937416  516753 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 00:39:46.938108  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:47.431355  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:47.431378  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:47.431386  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:47.431390  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:47.434760  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:47.930736  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:47.930759  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:47.930768  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:47.930773  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:47.933842  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:48.431820  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:48.431850  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:48.431861  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:48.431867  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:48.435153  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:48.930802  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:48.930831  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:48.930842  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:48.930847  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:48.934475  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:49.431498  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:49.431525  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:49.431534  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:49.431538  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:49.435295  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:49.435926  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:49.931361  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:49.931387  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:49.931397  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:49.931403  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:49.934677  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:50.431114  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:50.431139  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:50.431147  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:50.431151  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:50.434111  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:50.930946  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:50.930975  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:50.930985  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:50.930989  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:50.935229  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:39:51.431099  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:51.431141  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.431154  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.431160  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.434671  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:51.931707  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:51.931736  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.931745  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.931749  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.934803  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:51.935647  516753 node_ready.go:49] node "ha-161305-m03" has status "Ready":"True"
	I0730 00:39:51.935674  516753 node_ready.go:38] duration metric: took 18.005114813s for node "ha-161305-m03" to be "Ready" ...
	I0730 00:39:51.935686  516753 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:39:51.935773  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:51.935786  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.935796  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.935804  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.942634  516753 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 00:39:51.949823  516753 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.949915  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bdpds
	I0730 00:39:51.949923  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.949931  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.949935  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.953080  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:51.953646  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:51.953659  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.953666  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.953670  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.956092  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.956511  516753 pod_ready.go:92] pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:51.956527  516753 pod_ready.go:81] duration metric: took 6.677219ms for pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.956536  516753 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.956583  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mzcln
	I0730 00:39:51.956590  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.956597  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.956603  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.958990  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.959533  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:51.959546  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.959555  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.959561  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.961627  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.962111  516753 pod_ready.go:92] pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:51.962131  516753 pod_ready.go:81] duration metric: took 5.587966ms for pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.962152  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.962228  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305
	I0730 00:39:51.962237  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.962248  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.962255  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.964321  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.965030  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:51.965047  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.965058  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.965064  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.967502  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.967941  516753 pod_ready.go:92] pod "etcd-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:51.967965  516753 pod_ready.go:81] duration metric: took 5.793254ms for pod "etcd-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.967976  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.968044  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305-m02
	I0730 00:39:51.968056  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.968072  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.968079  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.970942  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.971929  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:51.971944  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.971952  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.971955  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.974306  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.974863  516753 pod_ready.go:92] pod "etcd-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:51.974883  516753 pod_ready.go:81] duration metric: took 6.898155ms for pod "etcd-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.974896  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.132180  516753 request.go:629] Waited for 157.209152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305-m03
	I0730 00:39:52.132248  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305-m03
	I0730 00:39:52.132266  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.132276  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.132283  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.135623  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:52.332592  516753 request.go:629] Waited for 196.363071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:52.332672  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:52.332680  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.332691  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.332697  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.336136  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:52.336603  516753 pod_ready.go:92] pod "etcd-ha-161305-m03" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:52.336625  516753 pod_ready.go:81] duration metric: took 361.718062ms for pod "etcd-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.336651  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.531710  516753 request.go:629] Waited for 194.967886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305
	I0730 00:39:52.531791  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305
	I0730 00:39:52.531802  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.531810  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.531818  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.535463  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:52.732736  516753 request.go:629] Waited for 196.392523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:52.732798  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:52.732803  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.732810  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.732814  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.740836  516753 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0730 00:39:52.741455  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:52.741489  516753 pod_ready.go:81] duration metric: took 404.824489ms for pod "kube-apiserver-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.741515  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.931759  516753 request.go:629] Waited for 190.119362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m02
	I0730 00:39:52.931903  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m02
	I0730 00:39:52.931924  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.931934  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.931940  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.936086  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:39:53.132667  516753 request.go:629] Waited for 195.771748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:53.132759  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:53.132770  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.132781  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.132788  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.136199  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.136656  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:53.136678  516753 pod_ready.go:81] duration metric: took 395.152635ms for pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.136691  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.331783  516753 request.go:629] Waited for 194.986103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m03
	I0730 00:39:53.331846  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m03
	I0730 00:39:53.331852  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.331859  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.331865  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.335697  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.532037  516753 request.go:629] Waited for 195.386948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:53.532143  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:53.532152  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.532165  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.532172  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.535532  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.536207  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305-m03" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:53.536227  516753 pod_ready.go:81] duration metric: took 399.528992ms for pod "kube-apiserver-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.536238  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.732653  516753 request.go:629] Waited for 196.316924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305
	I0730 00:39:53.732739  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305
	I0730 00:39:53.732745  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.732753  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.732757  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.736421  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.932572  516753 request.go:629] Waited for 194.928773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:53.932653  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:53.932663  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.932675  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.932683  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.935878  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.936437  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:53.936458  516753 pod_ready.go:81] duration metric: took 400.209865ms for pod "kube-controller-manager-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.936468  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.132530  516753 request.go:629] Waited for 195.97688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m02
	I0730 00:39:54.132594  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m02
	I0730 00:39:54.132601  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.132610  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.132615  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.136152  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:54.332410  516753 request.go:629] Waited for 195.441902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:54.332485  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:54.332491  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.332501  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.332519  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.335629  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:54.336560  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:54.336582  516753 pod_ready.go:81] duration metric: took 400.107169ms for pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.336592  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.532776  516753 request.go:629] Waited for 196.071018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m03
	I0730 00:39:54.532857  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m03
	I0730 00:39:54.532864  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.532872  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.532879  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.536395  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:54.732451  516753 request.go:629] Waited for 195.265957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:54.732547  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:54.732558  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.732568  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.732574  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.736178  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:54.736677  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305-m03" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:54.736698  516753 pod_ready.go:81] duration metric: took 400.098829ms for pod "kube-controller-manager-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.736720  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqr2f" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.931801  516753 request.go:629] Waited for 194.994778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqr2f
	I0730 00:39:54.931880  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqr2f
	I0730 00:39:54.931886  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.931908  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.931933  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.935261  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.132230  516753 request.go:629] Waited for 196.210898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:55.132325  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:55.132336  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.132348  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.132360  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.135845  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.136318  516753 pod_ready.go:92] pod "kube-proxy-pqr2f" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:55.136338  516753 pod_ready.go:81] duration metric: took 399.606227ms for pod "kube-proxy-pqr2f" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.136351  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v86sk" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.332508  516753 request.go:629] Waited for 196.05813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v86sk
	I0730 00:39:55.332590  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v86sk
	I0730 00:39:55.332601  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.332613  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.332623  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.336548  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.531742  516753 request.go:629] Waited for 194.290564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:55.531803  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:55.531816  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.531824  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.531828  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.534944  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.535498  516753 pod_ready.go:92] pod "kube-proxy-v86sk" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:55.535519  516753 pod_ready.go:81] duration metric: took 399.160843ms for pod "kube-proxy-v86sk" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.535529  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wptvn" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.732674  516753 request.go:629] Waited for 197.073515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wptvn
	I0730 00:39:55.732761  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wptvn
	I0730 00:39:55.732770  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.732779  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.732783  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.736129  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.932185  516753 request.go:629] Waited for 195.390624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:55.932257  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:55.932263  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.932272  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.932279  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.935524  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.936142  516753 pod_ready.go:92] pod "kube-proxy-wptvn" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:55.936162  516753 pod_ready.go:81] duration metric: took 400.627207ms for pod "kube-proxy-wptvn" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.936172  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.132690  516753 request.go:629] Waited for 196.427303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305
	I0730 00:39:56.132793  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305
	I0730 00:39:56.132802  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.132810  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.132816  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.136203  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:56.332315  516753 request.go:629] Waited for 195.359193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:56.332390  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:56.332395  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.332403  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.332411  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.335886  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:56.336461  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:56.336480  516753 pod_ready.go:81] duration metric: took 400.30083ms for pod "kube-scheduler-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.336492  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.532626  516753 request.go:629] Waited for 196.035458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m02
	I0730 00:39:56.532719  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m02
	I0730 00:39:56.532729  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.532741  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.532752  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.536219  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:56.732247  516753 request.go:629] Waited for 195.367062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:56.732315  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:56.732322  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.732332  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.732338  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.735731  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:56.736483  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:56.736505  516753 pod_ready.go:81] duration metric: took 400.004617ms for pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.736518  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.932649  516753 request.go:629] Waited for 196.051111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m03
	I0730 00:39:56.932762  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m03
	I0730 00:39:56.932768  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.932777  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.932784  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.936237  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:57.132364  516753 request.go:629] Waited for 195.410488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:57.132438  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:57.132443  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.132451  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.132457  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.135772  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:57.136435  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305-m03" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:57.136456  516753 pod_ready.go:81] duration metric: took 399.929871ms for pod "kube-scheduler-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:57.136467  516753 pod_ready.go:38] duration metric: took 5.200768417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:39:57.136483  516753 api_server.go:52] waiting for apiserver process to appear ...
	I0730 00:39:57.136547  516753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:39:57.155174  516753 api_server.go:72] duration metric: took 23.522630913s to wait for apiserver process to appear ...
	I0730 00:39:57.155209  516753 api_server.go:88] waiting for apiserver healthz status ...
	I0730 00:39:57.155239  516753 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0730 00:39:57.163492  516753 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0730 00:39:57.163592  516753 round_trippers.go:463] GET https://192.168.39.80:8443/version
	I0730 00:39:57.163605  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.163618  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.163629  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.165067  516753 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0730 00:39:57.165269  516753 api_server.go:141] control plane version: v1.30.3
	I0730 00:39:57.165290  516753 api_server.go:131] duration metric: took 10.072767ms to wait for apiserver health ...
	I0730 00:39:57.165299  516753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 00:39:57.332768  516753 request.go:629] Waited for 167.351583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:57.332841  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:57.332848  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.332856  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.332864  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.341731  516753 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0730 00:39:57.348866  516753 system_pods.go:59] 24 kube-system pods found
	I0730 00:39:57.348895  516753 system_pods.go:61] "coredns-7db6d8ff4d-bdpds" [7c1470c5-85f4-4dfa-84c0-14aa6c713e73] Running
	I0730 00:39:57.348901  516753 system_pods.go:61] "coredns-7db6d8ff4d-mzcln" [cab12f67-38e0-41f7-8414-120064dca1e6] Running
	I0730 00:39:57.348907  516753 system_pods.go:61] "etcd-ha-161305" [5c7dae60-3334-4bbb-90d0-96902a0e19ca] Running
	I0730 00:39:57.348910  516753 system_pods.go:61] "etcd-ha-161305-m02" [18952930-32a5-4b81-a67c-6324aee65eb8] Running
	I0730 00:39:57.348915  516753 system_pods.go:61] "etcd-ha-161305-m03" [4f9f6485-c2e1-4288-abd9-83dd8f742e9f] Running
	I0730 00:39:57.348920  516753 system_pods.go:61] "kindnet-dj7v2" [8d584855-119a-4df9-87d4-4c4fd59ec386] Running
	I0730 00:39:57.348925  516753 system_pods.go:61] "kindnet-x7292" [10f89bb1-e8b3-4901-b924-59401555bebd] Running
	I0730 00:39:57.348929  516753 system_pods.go:61] "kindnet-zrzxf" [3745faa8-044d-4923-8a49-c21a0332e208] Running
	I0730 00:39:57.348934  516753 system_pods.go:61] "kube-apiserver-ha-161305" [55b68f3e-7127-4a03-83d7-ea169937b7b7] Running
	I0730 00:39:57.348939  516753 system_pods.go:61] "kube-apiserver-ha-161305-m02" [834df1fc-4400-475f-b86e-7176f335f79b] Running
	I0730 00:39:57.348946  516753 system_pods.go:61] "kube-apiserver-ha-161305-m03" [9519b474-7a17-43b5-8ad0-78340215eea1] Running
	I0730 00:39:57.348956  516753 system_pods.go:61] "kube-controller-manager-ha-161305" [647f1107-c722-4d08-a32b-d53a24f212c9] Running
	I0730 00:39:57.348963  516753 system_pods.go:61] "kube-controller-manager-ha-161305-m02" [2b16c61d-99fe-4807-b362-2361e6d9ec03] Running
	I0730 00:39:57.348968  516753 system_pods.go:61] "kube-controller-manager-ha-161305-m03" [89d7e90c-024c-498e-9f64-6ea95255e90e] Running
	I0730 00:39:57.348977  516753 system_pods.go:61] "kube-proxy-pqr2f" [88c5dd9f-639f-4085-8a0f-064b53e870ea] Running
	I0730 00:39:57.348982  516753 system_pods.go:61] "kube-proxy-v86sk" [daba82b2-fd20-4b41-bba0-e8927cb91f2e] Running
	I0730 00:39:57.348989  516753 system_pods.go:61] "kube-proxy-wptvn" [1733d06b-6eb7-4dd5-9349-b727cc05e797] Running
	I0730 00:39:57.348997  516753 system_pods.go:61] "kube-scheduler-ha-161305" [c9ce0f0c-40b3-44ea-8c7d-f8b1d7af9e16] Running
	I0730 00:39:57.349002  516753 system_pods.go:61] "kube-scheduler-ha-161305-m02" [98fa3e7a-7ed2-44b7-a1be-7121ca4899b0] Running
	I0730 00:39:57.349009  516753 system_pods.go:61] "kube-scheduler-ha-161305-m03" [0df78a8a-e986-43a8-b8b5-0a2ce029b53b] Running
	I0730 00:39:57.349014  516753 system_pods.go:61] "kube-vip-ha-161305" [084d986e-4abd-4c66-aea9-5738f6a60ac5] Running
	I0730 00:39:57.349025  516753 system_pods.go:61] "kube-vip-ha-161305-m02" [6282069b-1ac8-44eb-910f-d658a28ae0f1] Running
	I0730 00:39:57.349029  516753 system_pods.go:61] "kube-vip-ha-161305-m03" [9e075c09-55e3-4669-acb0-b53947d96691] Running
	I0730 00:39:57.349031  516753 system_pods.go:61] "storage-provisioner" [75260b22-5ffc-4848-8c70-5b9cb3f010bf] Running
	I0730 00:39:57.349038  516753 system_pods.go:74] duration metric: took 183.733644ms to wait for pod list to return data ...
	I0730 00:39:57.349050  516753 default_sa.go:34] waiting for default service account to be created ...
	I0730 00:39:57.531719  516753 request.go:629] Waited for 182.570496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/default/serviceaccounts
	I0730 00:39:57.531787  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/default/serviceaccounts
	I0730 00:39:57.531794  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.531802  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.531806  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.535347  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:57.535514  516753 default_sa.go:45] found service account: "default"
	I0730 00:39:57.535534  516753 default_sa.go:55] duration metric: took 186.471929ms for default service account to be created ...
	I0730 00:39:57.535558  516753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 00:39:57.731815  516753 request.go:629] Waited for 196.17079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:57.731891  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:57.731901  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.731913  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.731924  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.738135  516753 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 00:39:57.744635  516753 system_pods.go:86] 24 kube-system pods found
	I0730 00:39:57.744666  516753 system_pods.go:89] "coredns-7db6d8ff4d-bdpds" [7c1470c5-85f4-4dfa-84c0-14aa6c713e73] Running
	I0730 00:39:57.744674  516753 system_pods.go:89] "coredns-7db6d8ff4d-mzcln" [cab12f67-38e0-41f7-8414-120064dca1e6] Running
	I0730 00:39:57.744680  516753 system_pods.go:89] "etcd-ha-161305" [5c7dae60-3334-4bbb-90d0-96902a0e19ca] Running
	I0730 00:39:57.744686  516753 system_pods.go:89] "etcd-ha-161305-m02" [18952930-32a5-4b81-a67c-6324aee65eb8] Running
	I0730 00:39:57.744692  516753 system_pods.go:89] "etcd-ha-161305-m03" [4f9f6485-c2e1-4288-abd9-83dd8f742e9f] Running
	I0730 00:39:57.744699  516753 system_pods.go:89] "kindnet-dj7v2" [8d584855-119a-4df9-87d4-4c4fd59ec386] Running
	I0730 00:39:57.744717  516753 system_pods.go:89] "kindnet-x7292" [10f89bb1-e8b3-4901-b924-59401555bebd] Running
	I0730 00:39:57.744727  516753 system_pods.go:89] "kindnet-zrzxf" [3745faa8-044d-4923-8a49-c21a0332e208] Running
	I0730 00:39:57.744737  516753 system_pods.go:89] "kube-apiserver-ha-161305" [55b68f3e-7127-4a03-83d7-ea169937b7b7] Running
	I0730 00:39:57.744747  516753 system_pods.go:89] "kube-apiserver-ha-161305-m02" [834df1fc-4400-475f-b86e-7176f335f79b] Running
	I0730 00:39:57.744756  516753 system_pods.go:89] "kube-apiserver-ha-161305-m03" [9519b474-7a17-43b5-8ad0-78340215eea1] Running
	I0730 00:39:57.744764  516753 system_pods.go:89] "kube-controller-manager-ha-161305" [647f1107-c722-4d08-a32b-d53a24f212c9] Running
	I0730 00:39:57.744772  516753 system_pods.go:89] "kube-controller-manager-ha-161305-m02" [2b16c61d-99fe-4807-b362-2361e6d9ec03] Running
	I0730 00:39:57.744778  516753 system_pods.go:89] "kube-controller-manager-ha-161305-m03" [89d7e90c-024c-498e-9f64-6ea95255e90e] Running
	I0730 00:39:57.744784  516753 system_pods.go:89] "kube-proxy-pqr2f" [88c5dd9f-639f-4085-8a0f-064b53e870ea] Running
	I0730 00:39:57.744792  516753 system_pods.go:89] "kube-proxy-v86sk" [daba82b2-fd20-4b41-bba0-e8927cb91f2e] Running
	I0730 00:39:57.744801  516753 system_pods.go:89] "kube-proxy-wptvn" [1733d06b-6eb7-4dd5-9349-b727cc05e797] Running
	I0730 00:39:57.744810  516753 system_pods.go:89] "kube-scheduler-ha-161305" [c9ce0f0c-40b3-44ea-8c7d-f8b1d7af9e16] Running
	I0730 00:39:57.744819  516753 system_pods.go:89] "kube-scheduler-ha-161305-m02" [98fa3e7a-7ed2-44b7-a1be-7121ca4899b0] Running
	I0730 00:39:57.744827  516753 system_pods.go:89] "kube-scheduler-ha-161305-m03" [0df78a8a-e986-43a8-b8b5-0a2ce029b53b] Running
	I0730 00:39:57.744834  516753 system_pods.go:89] "kube-vip-ha-161305" [084d986e-4abd-4c66-aea9-5738f6a60ac5] Running
	I0730 00:39:57.744842  516753 system_pods.go:89] "kube-vip-ha-161305-m02" [6282069b-1ac8-44eb-910f-d658a28ae0f1] Running
	I0730 00:39:57.744849  516753 system_pods.go:89] "kube-vip-ha-161305-m03" [9e075c09-55e3-4669-acb0-b53947d96691] Running
	I0730 00:39:57.744858  516753 system_pods.go:89] "storage-provisioner" [75260b22-5ffc-4848-8c70-5b9cb3f010bf] Running
	I0730 00:39:57.744867  516753 system_pods.go:126] duration metric: took 209.300644ms to wait for k8s-apps to be running ...
	I0730 00:39:57.744880  516753 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 00:39:57.744934  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:39:57.760382  516753 system_svc.go:56] duration metric: took 15.49353ms WaitForService to wait for kubelet
	I0730 00:39:57.760461  516753 kubeadm.go:582] duration metric: took 24.127926379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:39:57.760490  516753 node_conditions.go:102] verifying NodePressure condition ...
	I0730 00:39:57.931819  516753 request.go:629] Waited for 171.253197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes
	I0730 00:39:57.931881  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes
	I0730 00:39:57.931887  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.931895  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.931899  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.935672  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:57.936915  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:39:57.936942  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:39:57.936958  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:39:57.936963  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:39:57.936969  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:39:57.936974  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:39:57.936980  516753 node_conditions.go:105] duration metric: took 176.483522ms to run NodePressure ...
	I0730 00:39:57.937021  516753 start.go:241] waiting for startup goroutines ...
	I0730 00:39:57.937052  516753 start.go:255] writing updated cluster config ...
	I0730 00:39:57.937365  516753 ssh_runner.go:195] Run: rm -f paused
	I0730 00:39:57.992477  516753 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0730 00:39:57.994477  516753 out.go:177] * Done! kubectl is now configured to use "ha-161305" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.113116960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300216113092671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1bbe6d8-567f-48e8-ad14-4f50983c969c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.113654951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de67830e-7e28-49db-ad00-4526485ac012 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.113710031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de67830e-7e28-49db-ad00-4526485ac012 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.114046385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300002299892483,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857592527052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857553279147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28,PodSandboxId:dc6671f8236d535fcc06ecc8b64532f9509420897f07f373f2dd01e515657966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722299857509877039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722299845777100045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172229984
1990828595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458,PodSandboxId:f2dde65522fc02bfbe2f105b665b84be9121bd505e32068b99282ac44be1a0e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222998247
22757860,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a3f8db9aaefccb9f983dc9e69993dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722299822323078041,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552,PodSandboxId:e8cce281b68018929fa41225cc7f3eb6c9dbacce5a852a94576ec2cb00b0ff5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722299822226679596,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722299822148810432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2,PodSandboxId:22c993ee1124526061090ce669c35d1aa444001554899fa2528adb94105cd632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722299822115721860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de67830e-7e28-49db-ad00-4526485ac012 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.148458162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=558c5973-f3bd-491e-9ba1-24c752ff5783 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.148540898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=558c5973-f3bd-491e-9ba1-24c752ff5783 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.150289707Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=844f756d-bb51-47f3-b979-26495c85451f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.152532019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300216152499567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=844f756d-bb51-47f3-b979-26495c85451f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.155430473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc39c041-c20b-4b6a-af2f-0b6a7bd4c03d name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.155665288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc39c041-c20b-4b6a-af2f-0b6a7bd4c03d name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.155913700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300002299892483,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857592527052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857553279147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28,PodSandboxId:dc6671f8236d535fcc06ecc8b64532f9509420897f07f373f2dd01e515657966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722299857509877039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722299845777100045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172229984
1990828595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458,PodSandboxId:f2dde65522fc02bfbe2f105b665b84be9121bd505e32068b99282ac44be1a0e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222998247
22757860,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a3f8db9aaefccb9f983dc9e69993dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722299822323078041,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552,PodSandboxId:e8cce281b68018929fa41225cc7f3eb6c9dbacce5a852a94576ec2cb00b0ff5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722299822226679596,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722299822148810432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2,PodSandboxId:22c993ee1124526061090ce669c35d1aa444001554899fa2528adb94105cd632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722299822115721860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc39c041-c20b-4b6a-af2f-0b6a7bd4c03d name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.194519589Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=508c6f3f-0087-46a6-b78c-780a8365310c name=/runtime.v1.RuntimeService/Version
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.194619794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=508c6f3f-0087-46a6-b78c-780a8365310c name=/runtime.v1.RuntimeService/Version
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.195857282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ee3cb3b-497d-483d-b2b0-76120631d796 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.196379627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300216196358219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ee3cb3b-497d-483d-b2b0-76120631d796 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.197101564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b689b12c-5670-488d-80c3-488cae1fdacb name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.197154437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b689b12c-5670-488d-80c3-488cae1fdacb name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.197561038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300002299892483,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857592527052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857553279147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28,PodSandboxId:dc6671f8236d535fcc06ecc8b64532f9509420897f07f373f2dd01e515657966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722299857509877039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722299845777100045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172229984
1990828595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458,PodSandboxId:f2dde65522fc02bfbe2f105b665b84be9121bd505e32068b99282ac44be1a0e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222998247
22757860,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a3f8db9aaefccb9f983dc9e69993dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722299822323078041,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552,PodSandboxId:e8cce281b68018929fa41225cc7f3eb6c9dbacce5a852a94576ec2cb00b0ff5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722299822226679596,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722299822148810432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2,PodSandboxId:22c993ee1124526061090ce669c35d1aa444001554899fa2528adb94105cd632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722299822115721860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b689b12c-5670-488d-80c3-488cae1fdacb name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.238838340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6da39359-0ae8-4eec-9af3-2d68002d52ed name=/runtime.v1.RuntimeService/Version
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.238939995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6da39359-0ae8-4eec-9af3-2d68002d52ed name=/runtime.v1.RuntimeService/Version
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.240322728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b60fbf78-0c89-49f0-be66-ae22654e4723 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.240952312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300216240922351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b60fbf78-0c89-49f0-be66-ae22654e4723 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.241646809Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7263555-ad87-43e5-aefc-ba71d0599c3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.241701035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7263555-ad87-43e5-aefc-ba71d0599c3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:43:36 ha-161305 crio[686]: time="2024-07-30 00:43:36.242161927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300002299892483,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857592527052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857553279147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28,PodSandboxId:dc6671f8236d535fcc06ecc8b64532f9509420897f07f373f2dd01e515657966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722299857509877039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722299845777100045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172229984
1990828595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458,PodSandboxId:f2dde65522fc02bfbe2f105b665b84be9121bd505e32068b99282ac44be1a0e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222998247
22757860,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a3f8db9aaefccb9f983dc9e69993dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722299822323078041,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552,PodSandboxId:e8cce281b68018929fa41225cc7f3eb6c9dbacce5a852a94576ec2cb00b0ff5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722299822226679596,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722299822148810432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2,PodSandboxId:22c993ee1124526061090ce669c35d1aa444001554899fa2528adb94105cd632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722299822115721860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7263555-ad87-43e5-aefc-ba71d0599c3c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	33787e97a5dca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1ce43d8d3ab67       busybox-fc5497c4f-ttjx8
	2b2f636edadaa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   5d3af1b83b992       coredns-7db6d8ff4d-bdpds
	f6480acdda7d5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   fb1702cc41245       coredns-7db6d8ff4d-mzcln
	922c527ae0dbe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   dc6671f8236d5       storage-provisioner
	625a67c138c38       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   ceb9cb15a729f       kindnet-zrzxf
	1805553d07226       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   5821d52c1a1dd       kube-proxy-wptvn
	3d24c7873d038       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   f2dde65522fc0       kube-vip-ha-161305
	a2084c9181292       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   3f0cef29badb6       etcd-ha-161305
	0555b883473bf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   e8cce281b6801       kube-controller-manager-ha-161305
	16a5f7eb1118e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   cb4dface16b38       kube-scheduler-ha-161305
	c20fcb6fb9f2b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   22c993ee11245       kube-apiserver-ha-161305
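	
	A listing like the one above can usually be regenerated on the node itself, since it is built from the same CRI data; a minimal sketch, assuming the ha-161305 profile used in this run:
	
	  out/minikube-linux-amd64 -p ha-161305 ssh -- sudo crictl ps -a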
	
	
	==> coredns [2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81] <==
	[INFO] 10.244.2.2:60483 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003051093s
	[INFO] 10.244.0.4:59591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146746s
	[INFO] 10.244.0.4:40956 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850341s
	[INFO] 10.244.0.4:35576 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165468s
	[INFO] 10.244.0.4:58077 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0012218s
	[INFO] 10.244.0.4:49078 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000206386s
	[INFO] 10.244.1.2:48352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113505s
	[INFO] 10.244.1.2:37780 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001816793s
	[INFO] 10.244.1.2:33649 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128148s
	[INFO] 10.244.1.2:48051 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092471s
	[INFO] 10.244.1.2:36198 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007191s
	[INFO] 10.244.2.2:35489 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018657s
	[INFO] 10.244.2.2:54354 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142599s
	[INFO] 10.244.2.2:58953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134101s
	[INFO] 10.244.2.2:60956 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078404s
	[INFO] 10.244.0.4:45817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115908s
	[INFO] 10.244.1.2:38448 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117252s
	[INFO] 10.244.1.2:37783 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087436s
	[INFO] 10.244.2.2:44186 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138301s
	[INFO] 10.244.0.4:42700 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074904s
	[INFO] 10.244.0.4:41284 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112024s
	[INFO] 10.244.0.4:39360 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000096229s
	[INFO] 10.244.1.2:35167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095182s
	[INFO] 10.244.1.2:37860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007318s
	[INFO] 10.244.1.2:40179 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076418s
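	
	Entries of this shape are logged by CoreDNS for every query it answers. One way to exercise the resolver and produce such a line is an in-cluster lookup, for example via the busybox pod that appears earlier in this log (illustrative; assumes the kubectl context matches the minikube profile name):
	
	  kubectl --context ha-161305 exec busybox-fc5497c4f-ttjx8 -- nslookup kubernetes.default.svc.cluster.local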
	
	
	==> coredns [f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b] <==
	[INFO] 10.244.2.2:34155 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.007684984s
	[INFO] 10.244.0.4:48164 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015844953s
	[INFO] 10.244.0.4:37925 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001742548s
	[INFO] 10.244.1.2:60200 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000507695s
	[INFO] 10.244.2.2:54293 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003671077s
	[INFO] 10.244.2.2:59859 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017939s
	[INFO] 10.244.2.2:41789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144993s
	[INFO] 10.244.2.2:46813 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143383s
	[INFO] 10.244.2.2:35590 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107787s
	[INFO] 10.244.0.4:40333 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147444s
	[INFO] 10.244.0.4:41070 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094914s
	[INFO] 10.244.0.4:60015 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119517s
	[INFO] 10.244.1.2:41685 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001405792s
	[INFO] 10.244.1.2:48444 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009825s
	[INFO] 10.244.1.2:38476 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107007s
	[INFO] 10.244.0.4:41768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098341s
	[INFO] 10.244.0.4:54976 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067321s
	[INFO] 10.244.0.4:60391 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053259s
	[INFO] 10.244.1.2:36807 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164322s
	[INFO] 10.244.1.2:38239 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011686s
	[INFO] 10.244.2.2:58831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129058s
	[INFO] 10.244.2.2:56804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134761s
	[INFO] 10.244.2.2:41613 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109006s
	[INFO] 10.244.0.4:60974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155306s
	[INFO] 10.244.1.2:58876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114279s
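	
	Both CoreDNS replicas are serving queries here; a quick health check of the deployment itself would be along these lines (a sketch, again assuming the context name matches the profile):
	
	  kubectl --context ha-161305 -n kube-system get pods -l k8s-app=kube-dns -o wide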
	
	
	==> describe nodes <==
	Name:               ha-161305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T00_37_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:37:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:43:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:40:10 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:40:10 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:40:10 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:40:10 +0000   Tue, 30 Jul 2024 00:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-161305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee5b503318a04d5fa9f6151b095f43f6
	  System UUID:                ee5b5033-18a0-4d5f-a9f6-151b095f43f6
	  Boot ID:                    c41944eb-218c-41cb-bf89-ac90ba0a8709
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ttjx8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7db6d8ff4d-bdpds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 coredns-7db6d8ff4d-mzcln             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-161305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m28s
	  kube-system                 kindnet-zrzxf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-161305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-ha-161305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-wptvn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-161305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-vip-ha-161305                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m14s  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m35s  kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m28s  kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s  kubelet          Node ha-161305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s  kubelet          Node ha-161305 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m15s  node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal  NodeReady                6m     kubelet          Node ha-161305 status is now: NodeReady
	  Normal  RegisteredNode           4m59s  node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal  RegisteredNode           3m48s  node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
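	
	The node description above is standard kubectl output; an equivalent live view can normally be obtained with something like:
	
	  kubectl --context ha-161305 describe node ha-161305
	  kubectl --context ha-161305 get nodes -o wide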
	
	
	Name:               ha-161305-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_38_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:38:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:41:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 30 Jul 2024 00:40:21 +0000   Tue, 30 Jul 2024 00:41:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 30 Jul 2024 00:40:21 +0000   Tue, 30 Jul 2024 00:41:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 30 Jul 2024 00:40:21 +0000   Tue, 30 Jul 2024 00:41:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 30 Jul 2024 00:40:21 +0000   Tue, 30 Jul 2024 00:41:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    ha-161305-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a157fd7e5c14479d97024c5548311976
	  System UUID:                a157fd7e-5c14-479d-9702-4c5548311976
	  Boot ID:                    a3712653-a4cd-4869-89f3-eb00a1ea7923
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v2pq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-161305-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m10s
	  kube-system                 kindnet-dj7v2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-161305-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-161305-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-pqr2f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-161305-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-vip-ha-161305-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m17s)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m17s)  kubelet          Node ha-161305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m17s)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-161305-m02 status is now: NodeNotReady
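	
	The Unknown conditions and the NodeNotReady event above mean the kubelet on ha-161305-m02 stopped posting status shortly before these logs were collected. Typical follow-up checks would look roughly like this (a sketch, assuming the profile and node names from this run):
	
	  kubectl --context ha-161305 get nodes -o wide
	  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 -- sudo systemctl status kubelet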
	
	
	Name:               ha-161305-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_39_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:39:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:43:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:40:32 +0000   Tue, 30 Jul 2024 00:39:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:40:32 +0000   Tue, 30 Jul 2024 00:39:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:40:32 +0000   Tue, 30 Jul 2024 00:39:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:40:32 +0000   Tue, 30 Jul 2024 00:39:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-161305-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 879cbedc505f4ed1b9b3132464b6d69b
	  System UUID:                879cbedc-505f-4ed1-b9b3-132464b6d69b
	  Boot ID:                    c32c8962-f039-4ee5-9802-63544120ba8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k6rhx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-161305-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-x7292                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-161305-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-161305-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-v86sk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-161305-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-vip-ha-161305-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node ha-161305-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node ha-161305-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node ha-161305-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	  Normal  RegisteredNode           3m48s                node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	
	
	Name:               ha-161305-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_40_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:43:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:41:06 +0000   Tue, 30 Jul 2024 00:40:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:41:06 +0000   Tue, 30 Jul 2024 00:40:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:41:06 +0000   Tue, 30 Jul 2024 00:40:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:41:06 +0000   Tue, 30 Jul 2024 00:40:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-161305-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16981c9b42447afa5527547ca393cc7
	  System UUID:                b16981c9-b424-47af-a552-7547ca393cc7
	  Boot ID:                    e58479dc-cbf7-4760-8235-442459f77a42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bdl2h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-f9bfb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-161305-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-161305-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul30 00:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050562] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036109] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.703075] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.709633] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.532343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.201013] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.060589] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060160] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.175750] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.105381] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.262727] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +3.969960] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[Jul30 00:37] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063938] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.953682] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.085875] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.685156] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.526010] kauditd_printk_skb: 38 callbacks suppressed
	[Jul30 00:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb] <==
	{"level":"warn","ts":"2024-07-30T00:43:36.522569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.527299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.544144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.545795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.549201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.582286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.605205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.622912Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.627753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.634116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.637306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.645156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.650174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.655173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.659729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.663727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.670677Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.676313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.681891Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.685477Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.688236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.693459Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.698832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.704535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:43:36.729131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:43:36 up 7 min,  0 users,  load average: 0.21, 0.40, 0.25
	Linux ha-161305 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0] <==
	I0730 00:43:06.765492       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:43:16.764203       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:43:16.764252       1 main.go:299] handling current node
	I0730 00:43:16.764272       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:43:16.764280       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:43:16.764449       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:43:16.764480       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:43:16.764571       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:43:16.764597       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:43:26.757363       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:43:26.757457       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:43:26.757594       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:43:26.757650       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:43:26.757731       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:43:26.757751       1 main.go:299] handling current node
	I0730 00:43:26.757773       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:43:26.757788       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:43:36.757457       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:43:36.757527       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:43:36.757825       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:43:36.757852       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:43:36.757930       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:43:36.757940       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:43:36.758057       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:43:36.758080       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2] <==
	I0730 00:37:07.019277       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0730 00:37:07.025655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.80]
	I0730 00:37:07.026740       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 00:37:07.032606       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0730 00:37:07.224489       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0730 00:37:08.453762       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0730 00:37:08.481298       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0730 00:37:08.492607       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0730 00:37:21.438941       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0730 00:37:21.490268       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0730 00:40:03.588802       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32978: use of closed network connection
	E0730 00:40:03.790532       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32994: use of closed network connection
	E0730 00:40:04.001361       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33010: use of closed network connection
	E0730 00:40:04.196288       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33032: use of closed network connection
	E0730 00:40:04.405598       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33058: use of closed network connection
	E0730 00:40:04.585868       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33074: use of closed network connection
	E0730 00:40:04.756018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33096: use of closed network connection
	E0730 00:40:04.938605       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33126: use of closed network connection
	E0730 00:40:05.127204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33134: use of closed network connection
	E0730 00:40:05.432569       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33158: use of closed network connection
	E0730 00:40:05.605589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33174: use of closed network connection
	E0730 00:40:05.780589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33198: use of closed network connection
	E0730 00:40:05.955794       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33214: use of closed network connection
	E0730 00:40:06.149844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33240: use of closed network connection
	E0730 00:40:06.322780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33260: use of closed network connection
	
	
	==> kube-controller-manager [0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552] <==
	E0730 00:39:30.241549       1 certificate_controller.go:146] Sync csr-sszsg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-sszsg": the object has been modified; please apply your changes to the latest version and try again
	I0730 00:39:30.337893       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-161305-m03\" does not exist"
	I0730 00:39:30.363379       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-161305-m03" podCIDRs=["10.244.2.0/24"]
	I0730 00:39:31.464649       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-161305-m03"
	I0730 00:39:58.967624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.031261ms"
	I0730 00:39:59.092055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="123.63166ms"
	I0730 00:39:59.296514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="204.312045ms"
	I0730 00:39:59.388523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.947189ms"
	I0730 00:39:59.388810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.333µs"
	I0730 00:39:59.995122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.017µs"
	I0730 00:40:00.271524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.136µs"
	I0730 00:40:02.448890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.141841ms"
	I0730 00:40:02.449079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.945µs"
	I0730 00:40:02.509077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.976048ms"
	I0730 00:40:02.509246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.514µs"
	I0730 00:40:03.166305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.867922ms"
	I0730 00:40:03.167683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.291074ms"
	E0730 00:40:35.439627       1 certificate_controller.go:146] Sync csr-8tbmw failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-8tbmw": the object has been modified; please apply your changes to the latest version and try again
	I0730 00:40:35.709258       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-161305-m04\" does not exist"
	I0730 00:40:35.738280       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-161305-m04" podCIDRs=["10.244.3.0/24"]
	I0730 00:40:36.477101       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-161305-m04"
	I0730 00:40:55.364420       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-161305-m04"
	I0730 00:41:56.519512       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-161305-m04"
	I0730 00:41:56.643688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.500371ms"
	I0730 00:41:56.644361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.618µs"
	
	
	==> kube-proxy [1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2] <==
	I0730 00:37:22.378727       1 server_linux.go:69] "Using iptables proxy"
	I0730 00:37:22.393672       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.80"]
	I0730 00:37:22.514114       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 00:37:22.514175       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 00:37:22.514197       1 server_linux.go:165] "Using iptables Proxier"
	I0730 00:37:22.517669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 00:37:22.518064       1 server.go:872] "Version info" version="v1.30.3"
	I0730 00:37:22.518099       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:37:22.522742       1 config.go:192] "Starting service config controller"
	I0730 00:37:22.523094       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 00:37:22.523149       1 config.go:101] "Starting endpoint slice config controller"
	I0730 00:37:22.523158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 00:37:22.524314       1 config.go:319] "Starting node config controller"
	I0730 00:37:22.524343       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 00:37:22.624532       1 shared_informer.go:320] Caches are synced for node config
	I0730 00:37:22.624613       1 shared_informer.go:320] Caches are synced for service config
	I0730 00:37:22.625083       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12] <==
	I0730 00:39:58.905622       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="aafc31d9-59ed-4484-9345-b2c760317016" pod="default/busybox-fc5497c4f-v2pq7" assumedNode="ha-161305-m02" currentNode="ha-161305-m03"
	E0730 00:39:58.919299       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v2pq7\": pod busybox-fc5497c4f-v2pq7 is already assigned to node \"ha-161305-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-v2pq7" node="ha-161305-m03"
	E0730 00:39:58.920013       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod aafc31d9-59ed-4484-9345-b2c760317016(default/busybox-fc5497c4f-v2pq7) was assumed on ha-161305-m03 but assigned to ha-161305-m02" pod="default/busybox-fc5497c4f-v2pq7"
	E0730 00:39:58.920111       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v2pq7\": pod busybox-fc5497c4f-v2pq7 is already assigned to node \"ha-161305-m02\"" pod="default/busybox-fc5497c4f-v2pq7"
	I0730 00:39:58.920183       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-v2pq7" node="ha-161305-m02"
	E0730 00:39:58.969637       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ttjx8\": pod busybox-fc5497c4f-ttjx8 is already assigned to node \"ha-161305\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-ttjx8" node="ha-161305"
	E0730 00:39:58.969705       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 93297df5-25c9-4722-8f86-668316a3d005(default/busybox-fc5497c4f-ttjx8) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-ttjx8"
	E0730 00:39:58.969726       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ttjx8\": pod busybox-fc5497c4f-ttjx8 is already assigned to node \"ha-161305\"" pod="default/busybox-fc5497c4f-ttjx8"
	I0730 00:39:58.969751       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-ttjx8" node="ha-161305"
	E0730 00:39:58.975773       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-k6rhx\": pod busybox-fc5497c4f-k6rhx is already assigned to node \"ha-161305-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-k6rhx" node="ha-161305-m03"
	E0730 00:39:58.979457       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1c8de485-2ea1-454d-9b0d-aec913ebd0f5(default/busybox-fc5497c4f-k6rhx) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-k6rhx"
	E0730 00:39:58.980146       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-k6rhx\": pod busybox-fc5497c4f-k6rhx is already assigned to node \"ha-161305-m03\"" pod="default/busybox-fc5497c4f-k6rhx"
	I0730 00:39:58.980251       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-k6rhx" node="ha-161305-m03"
	E0730 00:40:35.786316       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-f9bfb\": pod kube-proxy-f9bfb is already assigned to node \"ha-161305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-f9bfb" node="ha-161305-m04"
	E0730 00:40:35.786430       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c223dc56-cf6b-4421-9070-f9b94d291026(kube-system/kube-proxy-f9bfb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-f9bfb"
	E0730 00:40:35.786455       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-f9bfb\": pod kube-proxy-f9bfb is already assigned to node \"ha-161305-m04\"" pod="kube-system/kube-proxy-f9bfb"
	I0730 00:40:35.786482       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-f9bfb" node="ha-161305-m04"
	E0730 00:40:35.793184       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qvmll\": pod kindnet-qvmll is already assigned to node \"ha-161305-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qvmll" node="ha-161305-m04"
	E0730 00:40:35.793336       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 319a869c-bad1-4daa-8ac7-72163167c412(kube-system/kindnet-qvmll) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qvmll"
	E0730 00:40:35.793357       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qvmll\": pod kindnet-qvmll is already assigned to node \"ha-161305-m04\"" pod="kube-system/kindnet-qvmll"
	I0730 00:40:35.793400       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qvmll" node="ha-161305-m04"
	E0730 00:40:35.912231       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnx2t\": pod kindnet-pnx2t is already assigned to node \"ha-161305-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnx2t" node="ha-161305-m04"
	E0730 00:40:35.913077       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e12ff04f-f80b-4c33-b030-f515f22d607d(kube-system/kindnet-pnx2t) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pnx2t"
	E0730 00:40:35.913227       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnx2t\": pod kindnet-pnx2t is already assigned to node \"ha-161305-m04\"" pod="kube-system/kindnet-pnx2t"
	I0730 00:40:35.913334       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pnx2t" node="ha-161305-m04"
	
	
	==> kubelet <==
	Jul 30 00:39:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:39:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:39:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:39:58 ha-161305 kubelet[1372]: I0730 00:39:58.943008    1372 topology_manager.go:215] "Topology Admit Handler" podUID="93297df5-25c9-4722-8f86-668316a3d005" podNamespace="default" podName="busybox-fc5497c4f-ttjx8"
	Jul 30 00:39:59 ha-161305 kubelet[1372]: I0730 00:39:59.053659    1372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd5vl\" (UniqueName: \"kubernetes.io/projected/93297df5-25c9-4722-8f86-668316a3d005-kube-api-access-xd5vl\") pod \"busybox-fc5497c4f-ttjx8\" (UID: \"93297df5-25c9-4722-8f86-668316a3d005\") " pod="default/busybox-fc5497c4f-ttjx8"
	Jul 30 00:40:08 ha-161305 kubelet[1372]: E0730 00:40:08.372120    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:40:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:40:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:40:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:40:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:41:08 ha-161305 kubelet[1372]: E0730 00:41:08.373439    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:41:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:41:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:41:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:41:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:42:08 ha-161305 kubelet[1372]: E0730 00:42:08.372435    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:42:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:42:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:42:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:42:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:43:08 ha-161305 kubelet[1372]: E0730 00:43:08.372347    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:43:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:43:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:43:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:43:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-161305 -n ha-161305
helpers_test.go:261: (dbg) Run:  kubectl --context ha-161305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.90s)
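The post-mortem checks above can be replayed by hand against the same profile; a minimal sketch using only the two commands the harness already ran (profile name ha-161305 taken from the logs above):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-161305 -n ha-161305
	kubectl --context ha-161305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

The first command prints the API server state for the ha-161305 node; the second lists any pods across all namespaces that are not in the Running phase.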

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (54.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
E0730 00:43:42.934726  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 3 (3.207476749s)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-161305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:43:41.277429  521595 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:43:41.277547  521595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:41.277556  521595 out.go:304] Setting ErrFile to fd 2...
	I0730 00:43:41.277570  521595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:41.277773  521595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:43:41.277967  521595 out.go:298] Setting JSON to false
	I0730 00:43:41.278000  521595 mustload.go:65] Loading cluster: ha-161305
	I0730 00:43:41.278121  521595 notify.go:220] Checking for updates...
	I0730 00:43:41.278501  521595 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:43:41.278528  521595 status.go:255] checking status of ha-161305 ...
	I0730 00:43:41.278947  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:41.279026  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:41.294741  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0730 00:43:41.295286  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:41.295905  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:41.295930  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:41.296258  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:41.296493  521595 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:43:41.298139  521595 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:43:41.298160  521595 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:41.298432  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:41.298492  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:41.313823  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0730 00:43:41.314315  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:41.314869  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:41.314902  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:41.315265  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:41.315436  521595 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:43:41.318687  521595 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:41.319306  521595 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:41.319344  521595 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:41.319519  521595 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:41.319813  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:41.319849  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:41.335248  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44845
	I0730 00:43:41.335652  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:41.336188  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:41.336219  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:41.336600  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:41.336838  521595 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:43:41.337068  521595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:41.337102  521595 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:43:41.339759  521595 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:41.340260  521595 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:41.340292  521595 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:41.340409  521595 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:43:41.340589  521595 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:43:41.340753  521595 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:43:41.340895  521595 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:43:41.428381  521595 ssh_runner.go:195] Run: systemctl --version
	I0730 00:43:41.437111  521595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:41.452386  521595 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:43:41.452414  521595 api_server.go:166] Checking apiserver status ...
	I0730 00:43:41.452449  521595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:43:41.465732  521595 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0730 00:43:41.474505  521595 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:43:41.474570  521595 ssh_runner.go:195] Run: ls
	I0730 00:43:41.479987  521595 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:43:41.484034  521595 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:43:41.484057  521595 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:43:41.484069  521595 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:43:41.484093  521595 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:43:41.484382  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:41.484431  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:41.500756  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I0730 00:43:41.501217  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:41.501731  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:41.501759  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:41.502142  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:41.502353  521595 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:43:41.504055  521595 status.go:330] ha-161305-m02 host status = "Running" (err=<nil>)
	I0730 00:43:41.504084  521595 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:41.504517  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:41.504567  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:41.521372  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0730 00:43:41.521871  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:41.522433  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:41.522468  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:41.522878  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:41.523088  521595 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:43:41.526280  521595 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:41.526737  521595 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:41.526765  521595 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:41.526860  521595 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:41.527174  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:41.527217  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:41.543603  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I0730 00:43:41.544130  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:41.544579  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:41.544597  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:41.544947  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:41.545110  521595 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:43:41.545323  521595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:41.545351  521595 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:43:41.548120  521595 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:41.548471  521595 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:41.548504  521595 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:41.548638  521595 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:43:41.548843  521595 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:43:41.548980  521595 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:43:41.549121  521595 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	W0730 00:43:44.081091  521595 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:43:44.081199  521595 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	E0730 00:43:44.081219  521595 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:44.081229  521595 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0730 00:43:44.081249  521595 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:44.081256  521595 status.go:255] checking status of ha-161305-m03 ...
	I0730 00:43:44.081582  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:44.081626  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:44.096962  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42047
	I0730 00:43:44.097553  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:44.098080  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:44.098107  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:44.098453  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:44.098654  521595 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:43:44.100260  521595 status.go:330] ha-161305-m03 host status = "Running" (err=<nil>)
	I0730 00:43:44.100275  521595 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:43:44.100653  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:44.100721  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:44.115887  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44577
	I0730 00:43:44.116317  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:44.116803  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:44.116827  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:44.117174  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:44.117375  521595 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:43:44.120216  521595 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:44.120657  521595 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:43:44.120690  521595 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:44.120833  521595 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:43:44.121156  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:44.121191  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:44.136355  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I0730 00:43:44.136784  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:44.137306  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:44.137333  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:44.137657  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:44.137869  521595 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:43:44.138119  521595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:44.138141  521595 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:43:44.140850  521595 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:44.141252  521595 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:43:44.141280  521595 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:44.141440  521595 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:43:44.141612  521595 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:43:44.141759  521595 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:43:44.141935  521595 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:43:44.223738  521595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:44.239396  521595 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:43:44.239426  521595 api_server.go:166] Checking apiserver status ...
	I0730 00:43:44.239461  521595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:43:44.255042  521595 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup
	W0730 00:43:44.265436  521595 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:43:44.265501  521595 ssh_runner.go:195] Run: ls
	I0730 00:43:44.269640  521595 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:43:44.276207  521595 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:43:44.276239  521595 status.go:422] ha-161305-m03 apiserver status = Running (err=<nil>)
	I0730 00:43:44.276249  521595 status.go:257] ha-161305-m03 status: &{Name:ha-161305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:43:44.276291  521595 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:43:44.276616  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:44.276661  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:44.292517  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0730 00:43:44.292942  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:44.293898  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:44.293928  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:44.294717  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:44.295031  521595 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:43:44.296655  521595 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:43:44.296674  521595 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:43:44.297027  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:44.297068  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:44.312977  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36237
	I0730 00:43:44.313480  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:44.313917  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:44.313940  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:44.314373  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:44.314616  521595 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:43:44.317839  521595 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:44.318369  521595 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:43:44.318404  521595 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:44.318512  521595 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:43:44.318801  521595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:44.318873  521595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:44.334381  521595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0730 00:43:44.334879  521595 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:44.335387  521595 main.go:141] libmachine: Using API Version  1
	I0730 00:43:44.335417  521595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:44.335738  521595 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:44.335930  521595 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:43:44.336136  521595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:44.336156  521595 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:43:44.338801  521595 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:44.339242  521595 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:43:44.339272  521595 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:44.339422  521595 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:43:44.339611  521595 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:43:44.339770  521595 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:43:44.339912  521595 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:43:44.424261  521595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:44.438766  521595 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 3 (4.909686617s)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-161305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:43:45.870987  521695 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:43:45.871486  521695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:45.871506  521695 out.go:304] Setting ErrFile to fd 2...
	I0730 00:43:45.871514  521695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:45.871990  521695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:43:45.872475  521695 out.go:298] Setting JSON to false
	I0730 00:43:45.872515  521695 mustload.go:65] Loading cluster: ha-161305
	I0730 00:43:45.872619  521695 notify.go:220] Checking for updates...
	I0730 00:43:45.873062  521695 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:43:45.873084  521695 status.go:255] checking status of ha-161305 ...
	I0730 00:43:45.873646  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:45.873693  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:45.889964  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0730 00:43:45.890380  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:45.891025  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:45.891058  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:45.891514  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:45.891748  521695 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:43:45.893406  521695 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:43:45.893424  521695 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:45.893735  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:45.893789  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:45.909191  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0730 00:43:45.909641  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:45.910201  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:45.910231  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:45.910605  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:45.910840  521695 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:43:45.914022  521695 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:45.914465  521695 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:45.914500  521695 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:45.914658  521695 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:45.915139  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:45.915191  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:45.931978  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0730 00:43:45.932457  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:45.932945  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:45.932969  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:45.933299  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:45.933477  521695 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:43:45.933653  521695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:45.933689  521695 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:43:45.936686  521695 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:45.937172  521695 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:45.937201  521695 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:45.937308  521695 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:43:45.937439  521695 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:43:45.937578  521695 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:43:45.937711  521695 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:43:46.027915  521695 ssh_runner.go:195] Run: systemctl --version
	I0730 00:43:46.034054  521695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:46.049202  521695 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:43:46.049236  521695 api_server.go:166] Checking apiserver status ...
	I0730 00:43:46.049278  521695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:43:46.065957  521695 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0730 00:43:46.079228  521695 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:43:46.079300  521695 ssh_runner.go:195] Run: ls
	I0730 00:43:46.083365  521695 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:43:46.087972  521695 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:43:46.087999  521695 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:43:46.088013  521695 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:43:46.088042  521695 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:43:46.088465  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:46.088514  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:46.104249  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37741
	I0730 00:43:46.104810  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:46.105442  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:46.105473  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:46.105863  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:46.106035  521695 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:43:46.107845  521695 status.go:330] ha-161305-m02 host status = "Running" (err=<nil>)
	I0730 00:43:46.107863  521695 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:46.108179  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:46.108217  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:46.123107  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0730 00:43:46.123488  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:46.123961  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:46.123982  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:46.124263  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:46.124459  521695 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:43:46.127226  521695 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:46.127627  521695 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:46.127652  521695 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:46.127846  521695 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:46.128124  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:46.128164  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:46.149363  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40631
	I0730 00:43:46.149867  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:46.150390  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:46.150428  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:46.150772  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:46.150964  521695 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:43:46.151165  521695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:46.151188  521695 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:43:46.154172  521695 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:46.154624  521695 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:46.154655  521695 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:46.154791  521695 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:43:46.154972  521695 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:43:46.155096  521695 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:43:46.155260  521695 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	W0730 00:43:47.157049  521695 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:47.157135  521695 retry.go:31] will retry after 151.784962ms: dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:43:50.384985  521695 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:43:50.385119  521695 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	E0730 00:43:50.385141  521695 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:50.385157  521695 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0730 00:43:50.385180  521695 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:50.385187  521695 status.go:255] checking status of ha-161305-m03 ...
	I0730 00:43:50.385509  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:50.385550  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:50.401412  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I0730 00:43:50.401889  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:50.402492  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:50.402529  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:50.402905  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:50.403119  521695 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:43:50.405027  521695 status.go:330] ha-161305-m03 host status = "Running" (err=<nil>)
	I0730 00:43:50.405043  521695 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:43:50.405324  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:50.405360  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:50.420679  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
	I0730 00:43:50.421149  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:50.421625  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:50.421644  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:50.421923  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:50.422116  521695 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:43:50.424723  521695 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:50.425111  521695 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:43:50.425136  521695 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:50.425295  521695 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:43:50.425624  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:50.425669  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:50.440201  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0730 00:43:50.440683  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:50.441185  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:50.441209  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:50.441520  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:50.441696  521695 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:43:50.441891  521695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:50.441910  521695 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:43:50.444585  521695 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:50.445008  521695 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:43:50.445046  521695 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:50.445184  521695 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:43:50.445356  521695 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:43:50.445477  521695 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:43:50.445623  521695 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:43:50.532332  521695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:50.546759  521695 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:43:50.546794  521695 api_server.go:166] Checking apiserver status ...
	I0730 00:43:50.546834  521695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:43:50.560533  521695 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup
	W0730 00:43:50.569673  521695 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:43:50.569725  521695 ssh_runner.go:195] Run: ls
	I0730 00:43:50.573623  521695 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:43:50.577892  521695 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:43:50.577912  521695 status.go:422] ha-161305-m03 apiserver status = Running (err=<nil>)
	I0730 00:43:50.577921  521695 status.go:257] ha-161305-m03 status: &{Name:ha-161305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:43:50.577937  521695 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:43:50.578213  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:50.578258  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:50.593836  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0730 00:43:50.594261  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:50.594730  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:50.594751  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:50.595153  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:50.595366  521695 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:43:50.597179  521695 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:43:50.597197  521695 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:43:50.597468  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:50.597502  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:50.612959  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
	I0730 00:43:50.613413  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:50.613866  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:50.613895  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:50.614215  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:50.614428  521695 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:43:50.616935  521695 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:50.617387  521695 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:43:50.617424  521695 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:50.617561  521695 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:43:50.617845  521695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:50.617892  521695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:50.632800  521695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I0730 00:43:50.633291  521695 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:50.633880  521695 main.go:141] libmachine: Using API Version  1
	I0730 00:43:50.633908  521695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:50.634328  521695 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:50.634504  521695 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:43:50.634713  521695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:50.634734  521695 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:43:50.637728  521695 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:50.638161  521695 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:43:50.638187  521695 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:50.638406  521695 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:43:50.638591  521695 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:43:50.638758  521695 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:43:50.638902  521695 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:43:50.719997  521695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:50.734856  521695 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
E0730 00:43:53.927499  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 3 (5.078246574s)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-161305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:43:51.845425  521796 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:43:51.845674  521796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:51.845683  521796 out.go:304] Setting ErrFile to fd 2...
	I0730 00:43:51.845687  521796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:51.845870  521796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:43:51.846100  521796 out.go:298] Setting JSON to false
	I0730 00:43:51.846132  521796 mustload.go:65] Loading cluster: ha-161305
	I0730 00:43:51.846202  521796 notify.go:220] Checking for updates...
	I0730 00:43:51.846599  521796 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:43:51.846622  521796 status.go:255] checking status of ha-161305 ...
	I0730 00:43:51.847099  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:51.847178  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:51.865734  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41563
	I0730 00:43:51.866418  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:51.867176  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:51.867211  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:51.867613  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:51.867811  521796 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:43:51.869502  521796 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:43:51.869520  521796 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:51.869795  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:51.869836  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:51.887039  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0730 00:43:51.887556  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:51.888175  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:51.888234  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:51.888662  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:51.888902  521796 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:43:51.892000  521796 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:51.892494  521796 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:51.892526  521796 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:51.892729  521796 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:51.893087  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:51.893132  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:51.908582  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0730 00:43:51.909060  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:51.909571  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:51.909593  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:51.909886  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:51.910072  521796 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:43:51.910276  521796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:51.910310  521796 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:43:51.913362  521796 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:51.913724  521796 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:51.913752  521796 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:51.913960  521796 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:43:51.914164  521796 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:43:51.914368  521796 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:43:51.914525  521796 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:43:51.998131  521796 ssh_runner.go:195] Run: systemctl --version
	I0730 00:43:52.006487  521796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:52.028418  521796 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:43:52.028451  521796 api_server.go:166] Checking apiserver status ...
	I0730 00:43:52.028492  521796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:43:52.048490  521796 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0730 00:43:52.060101  521796 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:43:52.060156  521796 ssh_runner.go:195] Run: ls
	I0730 00:43:52.064373  521796 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:43:52.068680  521796 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:43:52.068714  521796 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:43:52.068732  521796 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:43:52.068755  521796 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:43:52.069070  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:52.069106  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:52.084728  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
	I0730 00:43:52.085404  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:52.085858  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:52.085876  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:52.086202  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:52.086413  521796 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:43:52.088082  521796 status.go:330] ha-161305-m02 host status = "Running" (err=<nil>)
	I0730 00:43:52.088101  521796 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:52.088456  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:52.088504  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:52.104008  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I0730 00:43:52.104393  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:52.104941  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:52.104965  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:52.105323  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:52.105524  521796 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:43:52.108343  521796 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:52.108942  521796 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:52.108971  521796 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:52.109125  521796 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:52.109420  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:52.109463  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:52.125030  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I0730 00:43:52.125487  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:52.125951  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:52.125982  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:52.126316  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:52.126546  521796 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:43:52.126757  521796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:52.126779  521796 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:43:52.129466  521796 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:52.129849  521796 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:52.129878  521796 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:52.130056  521796 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:43:52.130235  521796 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:43:52.130444  521796 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:43:52.130587  521796 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	W0730 00:43:53.461040  521796 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:53.461111  521796 retry.go:31] will retry after 172.547955ms: dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:43:56.529119  521796 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:43:56.529297  521796 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	E0730 00:43:56.529328  521796 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:56.529342  521796 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0730 00:43:56.529372  521796 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:56.529387  521796 status.go:255] checking status of ha-161305-m03 ...
	I0730 00:43:56.529730  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:56.529802  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:56.545960  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I0730 00:43:56.546388  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:56.546842  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:56.546866  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:56.547219  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:56.547418  521796 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:43:56.549022  521796 status.go:330] ha-161305-m03 host status = "Running" (err=<nil>)
	I0730 00:43:56.549039  521796 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:43:56.549381  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:56.549424  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:56.564924  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I0730 00:43:56.565384  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:56.565840  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:56.565861  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:56.566210  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:56.566411  521796 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:43:56.568905  521796 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:56.569291  521796 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:43:56.569330  521796 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:56.569431  521796 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:43:56.569732  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:56.569760  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:56.584511  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0730 00:43:56.584958  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:56.585461  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:56.585492  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:56.585842  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:56.586056  521796 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:43:56.586220  521796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:56.586237  521796 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:43:56.588663  521796 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:56.589190  521796 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:43:56.589229  521796 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:43:56.589369  521796 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:43:56.589567  521796 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:43:56.589719  521796 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:43:56.589867  521796 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:43:56.672173  521796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:56.686419  521796 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:43:56.686462  521796 api_server.go:166] Checking apiserver status ...
	I0730 00:43:56.686506  521796 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:43:56.700656  521796 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup
	W0730 00:43:56.710115  521796 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:43:56.710181  521796 ssh_runner.go:195] Run: ls
	I0730 00:43:56.714284  521796 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:43:56.718890  521796 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:43:56.718916  521796 status.go:422] ha-161305-m03 apiserver status = Running (err=<nil>)
	I0730 00:43:56.718929  521796 status.go:257] ha-161305-m03 status: &{Name:ha-161305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:43:56.718957  521796 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:43:56.719339  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:56.719372  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:56.738051  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0730 00:43:56.738662  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:56.739195  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:56.739222  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:56.739565  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:56.739767  521796 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:43:56.741423  521796 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:43:56.741440  521796 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:43:56.741797  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:56.741828  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:56.758625  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43555
	I0730 00:43:56.759035  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:56.759487  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:56.759510  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:56.759849  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:56.760048  521796 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:43:56.762660  521796 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:56.763085  521796 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:43:56.763112  521796 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:56.763235  521796 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:43:56.763554  521796 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:56.763600  521796 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:56.778492  521796 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36131
	I0730 00:43:56.778933  521796 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:56.779388  521796 main.go:141] libmachine: Using API Version  1
	I0730 00:43:56.779408  521796 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:56.779716  521796 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:56.779888  521796 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:43:56.780077  521796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:56.780104  521796 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:43:56.782648  521796 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:56.783023  521796 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:43:56.783043  521796 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:43:56.783147  521796 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:43:56.783318  521796 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:43:56.783466  521796 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:43:56.783609  521796 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:43:56.863531  521796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:56.878112  521796 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 3 (4.650297101s)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-161305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:43:58.770027  521897 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:43:58.770455  521897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:58.770562  521897 out.go:304] Setting ErrFile to fd 2...
	I0730 00:43:58.770588  521897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:43:58.771049  521897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:43:58.771540  521897 out.go:298] Setting JSON to false
	I0730 00:43:58.771569  521897 mustload.go:65] Loading cluster: ha-161305
	I0730 00:43:58.771710  521897 notify.go:220] Checking for updates...
	I0730 00:43:58.771963  521897 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:43:58.771978  521897 status.go:255] checking status of ha-161305 ...
	I0730 00:43:58.772414  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:58.772452  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:58.788696  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0730 00:43:58.789182  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:58.789724  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:43:58.789747  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:58.790124  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:58.790325  521897 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:43:58.791854  521897 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:43:58.791872  521897 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:58.792242  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:58.792283  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:58.808576  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35327
	I0730 00:43:58.809029  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:58.809578  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:43:58.809613  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:58.809937  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:58.810164  521897 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:43:58.813436  521897 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:58.813829  521897 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:58.813865  521897 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:58.814013  521897 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:43:58.814326  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:58.814371  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:58.831129  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0730 00:43:58.831680  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:58.832217  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:43:58.832244  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:58.832633  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:58.832854  521897 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:43:58.833082  521897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:58.833107  521897 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:43:58.835953  521897 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:58.836439  521897 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:43:58.836473  521897 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:43:58.836598  521897 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:43:58.836796  521897 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:43:58.837007  521897 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:43:58.837163  521897 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:43:58.924517  521897 ssh_runner.go:195] Run: systemctl --version
	I0730 00:43:58.930728  521897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:43:58.946335  521897 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:43:58.946364  521897 api_server.go:166] Checking apiserver status ...
	I0730 00:43:58.946406  521897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:43:58.962030  521897 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0730 00:43:58.974398  521897 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:43:58.974466  521897 ssh_runner.go:195] Run: ls
	I0730 00:43:58.978946  521897 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:43:58.985899  521897 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:43:58.985936  521897 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:43:58.985952  521897 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:43:58.985975  521897 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:43:58.986288  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:58.986324  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:59.002071  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0730 00:43:59.002620  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:59.003207  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:43:59.003251  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:59.003605  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:59.003811  521897 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:43:59.005699  521897 status.go:330] ha-161305-m02 host status = "Running" (err=<nil>)
	I0730 00:43:59.005717  521897 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:59.005991  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:59.006031  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:59.021338  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I0730 00:43:59.021769  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:59.024404  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:43:59.024578  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:59.025001  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:59.025274  521897 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:43:59.028847  521897 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:59.029336  521897 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:59.029369  521897 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:59.029519  521897 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:43:59.029857  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:43:59.029903  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:43:59.045369  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0730 00:43:59.045843  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:43:59.046361  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:43:59.046387  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:43:59.046752  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:43:59.046956  521897 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:43:59.047177  521897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:43:59.047200  521897 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:43:59.050081  521897 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:59.050522  521897 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:43:59.050550  521897 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:43:59.050795  521897 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:43:59.050957  521897 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:43:59.051141  521897 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:43:59.051289  521897 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	W0730 00:43:59.601016  521897 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:43:59.601124  521897 retry.go:31] will retry after 343.495462ms: dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:44:03.025067  521897 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:44:03.025208  521897 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	E0730 00:44:03.025240  521897 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:44:03.025253  521897 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0730 00:44:03.025281  521897 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:44:03.025300  521897 status.go:255] checking status of ha-161305-m03 ...
	I0730 00:44:03.025661  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:03.025735  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:03.041245  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I0730 00:44:03.041741  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:03.042244  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:44:03.042269  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:03.042639  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:03.042854  521897 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:44:03.044578  521897 status.go:330] ha-161305-m03 host status = "Running" (err=<nil>)
	I0730 00:44:03.044596  521897 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:03.044943  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:03.045070  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:03.060752  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0730 00:44:03.061281  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:03.061866  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:44:03.061885  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:03.062307  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:03.062572  521897 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:44:03.065512  521897 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:03.065925  521897 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:03.065957  521897 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:03.066091  521897 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:03.066377  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:03.066415  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:03.081994  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40187
	I0730 00:44:03.082472  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:03.082968  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:44:03.082995  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:03.083382  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:03.083602  521897 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:44:03.083795  521897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:03.083822  521897 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:44:03.086865  521897 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:03.087262  521897 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:03.087297  521897 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:03.087423  521897 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:44:03.087606  521897 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:44:03.087774  521897 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:44:03.087927  521897 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:44:03.168095  521897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:03.183351  521897 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:44:03.183386  521897 api_server.go:166] Checking apiserver status ...
	I0730 00:44:03.183421  521897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:44:03.196701  521897 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup
	W0730 00:44:03.207300  521897 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:44:03.207375  521897 ssh_runner.go:195] Run: ls
	I0730 00:44:03.211555  521897 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:44:03.215747  521897 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:44:03.215775  521897 status.go:422] ha-161305-m03 apiserver status = Running (err=<nil>)
	I0730 00:44:03.215787  521897 status.go:257] ha-161305-m03 status: &{Name:ha-161305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:03.215804  521897 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:44:03.216237  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:03.216276  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:03.232047  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0730 00:44:03.232588  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:03.233074  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:44:03.233097  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:03.233446  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:03.233660  521897 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:44:03.235197  521897 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:44:03.235215  521897 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:03.235485  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:03.235517  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:03.251645  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I0730 00:44:03.252235  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:03.252758  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:44:03.252784  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:03.253112  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:03.253293  521897 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:44:03.255833  521897 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:03.256259  521897 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:03.256292  521897 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:03.256386  521897 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:03.256723  521897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:03.256769  521897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:03.271821  521897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0730 00:44:03.272315  521897 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:03.272882  521897 main.go:141] libmachine: Using API Version  1
	I0730 00:44:03.272909  521897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:03.273227  521897 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:03.273426  521897 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:44:03.273616  521897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:03.273634  521897 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:44:03.276550  521897 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:03.277036  521897 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:03.277066  521897 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:03.277209  521897 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:44:03.277373  521897 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:44:03.277511  521897 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:44:03.277644  521897 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:44:03.359421  521897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:03.374553  521897 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
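The recurring failure in the stderr block above is the SSH dial to ha-161305-m02: every attempt against 192.168.39.126:22 returns "connect: no route to host", so the status command marks that node Host:Error and exits with status 3. The Go sketch below is only an illustrative way to reproduce that reachability probe outside the test harness; the address is taken from the log and this is not minikube's sshutil code.

// Minimal reachability probe for the symptom shown above: a TCP dial to the
// SSH port of ha-161305-m02 (192.168.39.126:22, as reported in the log).
// Illustrative sketch only, not minikube's sshutil implementation.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.126:22" // address of ha-161305-m02 taken from the log above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// On this failing run the dial would be expected to return an error such as
		// "connect: no route to host", matching the stderr output above.
		fmt.Printf("%s unreachable: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("%s reachable\n", addr)
}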
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 3 (3.75861976s)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-161305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:44:08.104119  522013 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:44:08.104364  522013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:08.104372  522013 out.go:304] Setting ErrFile to fd 2...
	I0730 00:44:08.104376  522013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:08.104578  522013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:44:08.104773  522013 out.go:298] Setting JSON to false
	I0730 00:44:08.104809  522013 mustload.go:65] Loading cluster: ha-161305
	I0730 00:44:08.104914  522013 notify.go:220] Checking for updates...
	I0730 00:44:08.105239  522013 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:44:08.105261  522013 status.go:255] checking status of ha-161305 ...
	I0730 00:44:08.105732  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:08.105857  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:08.126010  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
	I0730 00:44:08.126467  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:08.127081  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:08.127113  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:08.127499  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:08.127720  522013 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:44:08.129408  522013 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:44:08.129427  522013 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:44:08.129829  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:08.129876  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:08.145570  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I0730 00:44:08.146045  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:08.146519  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:08.146536  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:08.146951  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:08.147185  522013 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:44:08.150606  522013 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:08.151102  522013 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:44:08.151147  522013 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:08.151319  522013 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:44:08.151801  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:08.151868  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:08.174524  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0730 00:44:08.175052  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:08.175597  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:08.175615  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:08.176013  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:08.176260  522013 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:44:08.176478  522013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:08.176527  522013 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:44:08.179922  522013 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:08.181925  522013 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:44:08.181960  522013 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:08.181980  522013 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:44:08.182345  522013 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:44:08.182795  522013 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:44:08.183216  522013 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:44:08.272232  522013 ssh_runner.go:195] Run: systemctl --version
	I0730 00:44:08.279160  522013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:08.294483  522013 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:44:08.294523  522013 api_server.go:166] Checking apiserver status ...
	I0730 00:44:08.294569  522013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:44:08.310790  522013 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0730 00:44:08.320128  522013 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:44:08.320186  522013 ssh_runner.go:195] Run: ls
	I0730 00:44:08.324395  522013 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:44:08.329835  522013 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:44:08.329865  522013 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:44:08.329878  522013 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:08.329908  522013 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:44:08.330382  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:08.330432  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:08.346900  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I0730 00:44:08.347413  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:08.347879  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:08.347904  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:08.348210  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:08.348398  522013 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:44:08.350093  522013 status.go:330] ha-161305-m02 host status = "Running" (err=<nil>)
	I0730 00:44:08.350110  522013 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:44:08.350401  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:08.350438  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:08.366380  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45353
	I0730 00:44:08.366800  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:08.367424  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:08.367453  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:08.367745  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:08.368192  522013 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:44:08.371449  522013 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:44:08.371951  522013 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:44:08.371980  522013 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:44:08.372141  522013 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:44:08.372436  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:08.372475  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:08.391830  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0730 00:44:08.392426  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:08.393037  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:08.393065  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:08.393375  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:08.393579  522013 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:44:08.393787  522013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:08.393815  522013 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:44:08.396849  522013 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:44:08.397334  522013 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:44:08.397376  522013 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:44:08.397517  522013 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:44:08.397691  522013 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:44:08.397963  522013 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:44:08.398132  522013 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	W0730 00:44:11.473000  522013 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:44:11.473135  522013 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	E0730 00:44:11.473159  522013 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:44:11.473171  522013 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0730 00:44:11.473201  522013 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:44:11.473212  522013 status.go:255] checking status of ha-161305-m03 ...
	I0730 00:44:11.473572  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:11.473615  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:11.490659  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44321
	I0730 00:44:11.491209  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:11.491674  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:11.491697  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:11.492070  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:11.492289  522013 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:44:11.494002  522013 status.go:330] ha-161305-m03 host status = "Running" (err=<nil>)
	I0730 00:44:11.494021  522013 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:11.494433  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:11.494477  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:11.509343  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0730 00:44:11.509836  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:11.510321  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:11.510344  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:11.510660  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:11.510840  522013 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:44:11.513557  522013 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:11.513940  522013 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:11.513970  522013 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:11.514098  522013 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:11.514404  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:11.514450  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:11.529126  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45385
	I0730 00:44:11.529595  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:11.530018  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:11.530039  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:11.530386  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:11.530557  522013 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:44:11.530746  522013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:11.530766  522013 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:44:11.533187  522013 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:11.533515  522013 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:11.533542  522013 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:11.533681  522013 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:44:11.533867  522013 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:44:11.534047  522013 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:44:11.534177  522013 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:44:11.617610  522013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:11.635139  522013 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:44:11.635170  522013 api_server.go:166] Checking apiserver status ...
	I0730 00:44:11.635203  522013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:44:11.647619  522013 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup
	W0730 00:44:11.656855  522013 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:44:11.656917  522013 ssh_runner.go:195] Run: ls
	I0730 00:44:11.660903  522013 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:44:11.666399  522013 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:44:11.666423  522013 status.go:422] ha-161305-m03 apiserver status = Running (err=<nil>)
	I0730 00:44:11.666432  522013 status.go:257] ha-161305-m03 status: &{Name:ha-161305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:11.666447  522013 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:44:11.666756  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:11.666800  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:11.681887  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0730 00:44:11.682357  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:11.682812  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:11.682835  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:11.683164  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:11.683343  522013 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:44:11.684907  522013 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:44:11.684924  522013 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:11.685249  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:11.685289  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:11.699747  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I0730 00:44:11.700197  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:11.700657  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:11.700675  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:11.700982  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:11.701194  522013 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:44:11.703843  522013 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:11.704229  522013 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:11.704259  522013 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:11.704326  522013 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:11.704598  522013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:11.704619  522013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:11.719645  522013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0730 00:44:11.720144  522013 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:11.720663  522013 main.go:141] libmachine: Using API Version  1
	I0730 00:44:11.720690  522013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:11.721018  522013 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:11.721223  522013 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:44:11.721425  522013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:11.721448  522013 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:44:11.723625  522013 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:11.724008  522013 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:11.724032  522013 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:11.724172  522013 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:44:11.724354  522013 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:44:11.724504  522013 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:44:11.724645  522013 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:44:11.803437  522013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:11.816172  522013 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
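The status runs above also show the failed dial being retried after a growing delay ("retry.go:31] will retry after 343.495462ms"). As a rough sketch of that retry-with-backoff pattern, under assumed parameters and not the actual retry helper minikube uses:

// Generic dial-with-backoff sketch mirroring the "will retry after ..." lines
// in the log. Attempt count, timeout, and initial wait are illustrative.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP dial, doubling the wait between
// attempts, until it succeeds or the attempt budget is exhausted.
func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial %s failed (%v), will retry after %s\n", addr, err, wait)
		time.Sleep(wait)
		wait *= 2
	}
	return nil, fmt.Errorf("giving up on %s: %w", addr, lastErr)
}

func main() {
	// Same unreachable address as in the log; on this cluster the call would
	// be expected to exhaust its retries and report the final dial error.
	if conn, err := dialWithRetry("192.168.39.126:22", 3, 300*time.Millisecond); err != nil {
		fmt.Println(err)
	} else {
		conn.Close()
	}
}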
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 3 (3.744117462s)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-161305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:44:14.789163  522130 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:44:14.789460  522130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:14.789472  522130 out.go:304] Setting ErrFile to fd 2...
	I0730 00:44:14.789478  522130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:14.789710  522130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:44:14.789889  522130 out.go:298] Setting JSON to false
	I0730 00:44:14.789921  522130 mustload.go:65] Loading cluster: ha-161305
	I0730 00:44:14.790041  522130 notify.go:220] Checking for updates...
	I0730 00:44:14.790349  522130 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:44:14.790368  522130 status.go:255] checking status of ha-161305 ...
	I0730 00:44:14.790793  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:14.790861  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:14.810297  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0730 00:44:14.810747  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:14.811459  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:14.811489  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:14.811913  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:14.812139  522130 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:44:14.813853  522130 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:44:14.813872  522130 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:44:14.814190  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:14.814244  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:14.829658  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0730 00:44:14.830091  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:14.830582  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:14.830605  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:14.830975  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:14.831180  522130 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:44:14.834546  522130 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:14.835019  522130 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:44:14.835050  522130 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:14.835209  522130 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:44:14.835624  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:14.835677  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:14.850928  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33609
	I0730 00:44:14.851528  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:14.852102  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:14.852131  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:14.852494  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:14.852744  522130 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:44:14.852945  522130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:14.852976  522130 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:44:14.855996  522130 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:14.856487  522130 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:44:14.856518  522130 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:14.856753  522130 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:44:14.856938  522130 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:44:14.857129  522130 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:44:14.857235  522130 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:44:14.946121  522130 ssh_runner.go:195] Run: systemctl --version
	I0730 00:44:14.953559  522130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:14.969320  522130 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:44:14.969361  522130 api_server.go:166] Checking apiserver status ...
	I0730 00:44:14.969405  522130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:44:14.984166  522130 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0730 00:44:14.994504  522130 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:44:14.994571  522130 ssh_runner.go:195] Run: ls
	I0730 00:44:14.999094  522130 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:44:15.003299  522130 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:44:15.003341  522130 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:44:15.003374  522130 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:15.003404  522130 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:44:15.003858  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:15.003914  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:15.020870  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0730 00:44:15.021341  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:15.021888  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:15.021919  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:15.022370  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:15.022587  522130 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:44:15.024029  522130 status.go:330] ha-161305-m02 host status = "Running" (err=<nil>)
	I0730 00:44:15.024051  522130 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:44:15.024493  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:15.024544  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:15.040669  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0730 00:44:15.041248  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:15.041773  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:15.041797  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:15.042130  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:15.042340  522130 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:44:15.045581  522130 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:44:15.046072  522130 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:44:15.046103  522130 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:44:15.046331  522130 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:44:15.046615  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:15.048511  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:15.065637  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45821
	I0730 00:44:15.066074  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:15.066672  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:15.066702  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:15.067091  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:15.067294  522130 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:44:15.067514  522130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:15.067541  522130 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:44:15.070773  522130 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:44:15.071297  522130 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:44:15.071332  522130 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:44:15.071498  522130 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:44:15.071703  522130 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:44:15.071846  522130 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:44:15.072007  522130 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	W0730 00:44:18.132966  522130 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	W0730 00:44:18.133139  522130 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	E0730 00:44:18.133173  522130 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:44:18.133186  522130 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0730 00:44:18.133212  522130 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0730 00:44:18.133226  522130 status.go:255] checking status of ha-161305-m03 ...
	I0730 00:44:18.133660  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:18.133723  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:18.148997  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35515
	I0730 00:44:18.149534  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:18.150055  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:18.150092  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:18.150397  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:18.150605  522130 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:44:18.152458  522130 status.go:330] ha-161305-m03 host status = "Running" (err=<nil>)
	I0730 00:44:18.152478  522130 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:18.152810  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:18.152857  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:18.168529  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34489
	I0730 00:44:18.169021  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:18.169501  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:18.169525  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:18.169838  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:18.170024  522130 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:44:18.172804  522130 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:18.173232  522130 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:18.173261  522130 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:18.173509  522130 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:18.173806  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:18.173846  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:18.189409  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0730 00:44:18.189837  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:18.190325  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:18.190345  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:18.190654  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:18.190867  522130 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:44:18.191075  522130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:18.191101  522130 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:44:18.194006  522130 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:18.194467  522130 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:18.194496  522130 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:18.194618  522130 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:44:18.194782  522130 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:44:18.194945  522130 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:44:18.195088  522130 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:44:18.276732  522130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:18.297260  522130 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:44:18.297291  522130 api_server.go:166] Checking apiserver status ...
	I0730 00:44:18.297326  522130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:44:18.313049  522130 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup
	W0730 00:44:18.322829  522130 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:44:18.322895  522130 ssh_runner.go:195] Run: ls
	I0730 00:44:18.326963  522130 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:44:18.331170  522130 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:44:18.331192  522130 status.go:422] ha-161305-m03 apiserver status = Running (err=<nil>)
	I0730 00:44:18.331202  522130 status.go:257] ha-161305-m03 status: &{Name:ha-161305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:18.331218  522130 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:44:18.331501  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:18.331540  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:18.347330  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0730 00:44:18.347834  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:18.348343  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:18.348381  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:18.348786  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:18.348974  522130 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:44:18.350580  522130 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:44:18.350599  522130 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:18.350882  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:18.350917  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:18.366019  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0730 00:44:18.366430  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:18.366963  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:18.366987  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:18.367344  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:18.367570  522130 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:44:18.370284  522130 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:18.370692  522130 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:18.370721  522130 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:18.370877  522130 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:18.371298  522130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:18.371367  522130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:18.386788  522130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0730 00:44:18.387274  522130 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:18.387804  522130 main.go:141] libmachine: Using API Version  1
	I0730 00:44:18.387825  522130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:18.388179  522130 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:18.388406  522130 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:44:18.388579  522130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:18.388601  522130 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:44:18.391192  522130 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:18.391623  522130 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:18.391650  522130 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:18.391780  522130 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:44:18.391956  522130 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:44:18.392142  522130 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:44:18.392281  522130 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:44:18.472368  522130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:18.486490  522130 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 7 (638.424007ms)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-161305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:44:22.346485  522250 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:44:22.346759  522250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:22.346769  522250 out.go:304] Setting ErrFile to fd 2...
	I0730 00:44:22.346773  522250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:22.346973  522250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:44:22.347150  522250 out.go:298] Setting JSON to false
	I0730 00:44:22.347178  522250 mustload.go:65] Loading cluster: ha-161305
	I0730 00:44:22.347294  522250 notify.go:220] Checking for updates...
	I0730 00:44:22.347689  522250 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:44:22.347713  522250 status.go:255] checking status of ha-161305 ...
	I0730 00:44:22.348242  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.348311  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.367366  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36089
	I0730 00:44:22.367890  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.368649  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.368691  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.369123  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.369352  522250 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:44:22.371271  522250 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:44:22.371292  522250 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:44:22.371676  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.371722  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.389395  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43671
	I0730 00:44:22.389865  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.390406  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.390433  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.390703  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.390866  522250 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:44:22.393813  522250 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:22.394230  522250 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:44:22.394272  522250 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:22.394364  522250 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:44:22.394701  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.394737  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.410212  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0730 00:44:22.410645  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.411184  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.411205  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.411544  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.411706  522250 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:44:22.411913  522250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:22.411950  522250 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:44:22.414976  522250 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:22.415283  522250 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:44:22.415342  522250 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:22.415585  522250 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:44:22.415792  522250 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:44:22.415948  522250 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:44:22.416144  522250 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:44:22.502266  522250 ssh_runner.go:195] Run: systemctl --version
	I0730 00:44:22.508625  522250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:22.524430  522250 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:44:22.524470  522250 api_server.go:166] Checking apiserver status ...
	I0730 00:44:22.524514  522250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:44:22.541080  522250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0730 00:44:22.555163  522250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:44:22.555223  522250 ssh_runner.go:195] Run: ls
	I0730 00:44:22.561531  522250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:44:22.565450  522250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:44:22.565480  522250 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:44:22.565494  522250 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:22.565516  522250 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:44:22.565829  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.565866  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.581647  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39297
	I0730 00:44:22.582098  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.582577  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.582602  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.582927  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.583134  522250 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:44:22.584831  522250 status.go:330] ha-161305-m02 host status = "Stopped" (err=<nil>)
	I0730 00:44:22.584844  522250 status.go:343] host is not running, skipping remaining checks
	I0730 00:44:22.584851  522250 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:22.584866  522250 status.go:255] checking status of ha-161305-m03 ...
	I0730 00:44:22.585166  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.585206  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.602499  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32891
	I0730 00:44:22.602957  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.603527  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.603554  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.606075  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.606712  522250 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:44:22.608559  522250 status.go:330] ha-161305-m03 host status = "Running" (err=<nil>)
	I0730 00:44:22.608583  522250 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:22.608919  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.608959  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.624178  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0730 00:44:22.624585  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.625071  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.625094  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.625459  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.625675  522250 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:44:22.628187  522250 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:22.628621  522250 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:22.628653  522250 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:22.628884  522250 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:22.629202  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.629268  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.644828  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46683
	I0730 00:44:22.645397  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.646062  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.646086  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.646427  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.646659  522250 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:44:22.646857  522250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:22.646884  522250 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:44:22.649534  522250 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:22.649924  522250 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:22.649949  522250 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:22.650191  522250 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:44:22.650443  522250 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:44:22.650584  522250 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:44:22.650697  522250 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:44:22.733547  522250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:22.749199  522250 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:44:22.749229  522250 api_server.go:166] Checking apiserver status ...
	I0730 00:44:22.749261  522250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:44:22.763151  522250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup
	W0730 00:44:22.772739  522250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:44:22.772800  522250 ssh_runner.go:195] Run: ls
	I0730 00:44:22.777028  522250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:44:22.781008  522250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:44:22.781039  522250 status.go:422] ha-161305-m03 apiserver status = Running (err=<nil>)
	I0730 00:44:22.781053  522250 status.go:257] ha-161305-m03 status: &{Name:ha-161305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:22.781070  522250 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:44:22.781500  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.781541  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.797669  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42987
	I0730 00:44:22.798127  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.798558  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.798578  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.798833  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.799041  522250 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:44:22.800272  522250 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:44:22.800296  522250 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:22.800642  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.800687  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.815618  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43397
	I0730 00:44:22.816033  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.816493  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.816514  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.816897  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.817122  522250 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:44:22.819791  522250 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:22.820205  522250 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:22.820226  522250 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:22.820415  522250 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:22.820810  522250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:22.820856  522250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:22.836100  522250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0730 00:44:22.836506  522250 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:22.837007  522250 main.go:141] libmachine: Using API Version  1
	I0730 00:44:22.837035  522250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:22.837346  522250 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:22.837534  522250 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:44:22.837752  522250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:22.837781  522250 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:44:22.840263  522250 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:22.840641  522250 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:22.840665  522250 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:22.840804  522250 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:44:22.840966  522250 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:44:22.841133  522250 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:44:22.841284  522250 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:44:22.923623  522250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:22.936520  522250 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 7 (636.437885ms)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-161305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:44:32.486195  522355 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:44:32.486306  522355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:32.486313  522355 out.go:304] Setting ErrFile to fd 2...
	I0730 00:44:32.486317  522355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:32.486509  522355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:44:32.486692  522355 out.go:298] Setting JSON to false
	I0730 00:44:32.486721  522355 mustload.go:65] Loading cluster: ha-161305
	I0730 00:44:32.486777  522355 notify.go:220] Checking for updates...
	I0730 00:44:32.487085  522355 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:44:32.487099  522355 status.go:255] checking status of ha-161305 ...
	I0730 00:44:32.487443  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.487517  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.507271  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38763
	I0730 00:44:32.507704  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.508354  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.508383  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.508816  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.509057  522355 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:44:32.510687  522355 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:44:32.510706  522355 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:44:32.511060  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.511103  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.527394  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0730 00:44:32.527875  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.528453  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.528485  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.528825  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.529043  522355 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:44:32.532372  522355 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:32.532876  522355 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:44:32.532904  522355 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:32.533035  522355 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:44:32.533345  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.533381  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.548634  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I0730 00:44:32.549056  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.549527  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.549551  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.549875  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.550108  522355 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:44:32.550336  522355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:32.550364  522355 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:44:32.553099  522355 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:32.553587  522355 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:44:32.553607  522355 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:44:32.553802  522355 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:44:32.553988  522355 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:44:32.554127  522355 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:44:32.554228  522355 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:44:32.641087  522355 ssh_runner.go:195] Run: systemctl --version
	I0730 00:44:32.654103  522355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:32.671636  522355 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:44:32.671672  522355 api_server.go:166] Checking apiserver status ...
	I0730 00:44:32.671718  522355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:44:32.690220  522355 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0730 00:44:32.701752  522355 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:44:32.701817  522355 ssh_runner.go:195] Run: ls
	I0730 00:44:32.706303  522355 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:44:32.711365  522355 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:44:32.711388  522355 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:44:32.711398  522355 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:32.711416  522355 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:44:32.711702  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.711739  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.727135  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0730 00:44:32.727586  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.728038  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.728057  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.728393  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.728552  522355 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:44:32.730141  522355 status.go:330] ha-161305-m02 host status = "Stopped" (err=<nil>)
	I0730 00:44:32.730158  522355 status.go:343] host is not running, skipping remaining checks
	I0730 00:44:32.730166  522355 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:32.730190  522355 status.go:255] checking status of ha-161305-m03 ...
	I0730 00:44:32.730467  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.730504  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.745034  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0730 00:44:32.745489  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.745994  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.746019  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.746400  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.746580  522355 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:44:32.748217  522355 status.go:330] ha-161305-m03 host status = "Running" (err=<nil>)
	I0730 00:44:32.748238  522355 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:32.748518  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.748557  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.764702  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0730 00:44:32.765132  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.765636  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.765684  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.766032  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.766272  522355 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:44:32.769278  522355 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:32.769661  522355 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:32.769687  522355 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:32.769873  522355 host.go:66] Checking if "ha-161305-m03" exists ...
	I0730 00:44:32.770173  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.770216  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.785363  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I0730 00:44:32.785944  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.786455  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.786475  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.786799  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.787003  522355 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:44:32.787205  522355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:32.787229  522355 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:44:32.789893  522355 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:32.790313  522355 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:32.790345  522355 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:32.790553  522355 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:44:32.790687  522355 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:44:32.790852  522355 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:44:32.790995  522355 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:44:32.872670  522355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:32.886664  522355 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:44:32.886712  522355 api_server.go:166] Checking apiserver status ...
	I0730 00:44:32.886761  522355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:44:32.900410  522355 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup
	W0730 00:44:32.909394  522355 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1604/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:44:32.909444  522355 ssh_runner.go:195] Run: ls
	I0730 00:44:32.913742  522355 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:44:32.921051  522355 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:44:32.921077  522355 status.go:422] ha-161305-m03 apiserver status = Running (err=<nil>)
	I0730 00:44:32.921085  522355 status.go:257] ha-161305-m03 status: &{Name:ha-161305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:44:32.921110  522355 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:44:32.921399  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.921434  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.936915  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I0730 00:44:32.937485  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.938019  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.938060  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.938390  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.938626  522355 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:44:32.940168  522355 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:44:32.940186  522355 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:32.940466  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.940503  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.956353  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I0730 00:44:32.956784  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.957239  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.957260  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.957586  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.957774  522355 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:44:32.960449  522355 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:32.960935  522355 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:32.960961  522355 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:32.961181  522355 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:44:32.961491  522355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:32.961531  522355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:32.976321  522355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46533
	I0730 00:44:32.976763  522355 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:32.977234  522355 main.go:141] libmachine: Using API Version  1
	I0730 00:44:32.977259  522355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:32.977625  522355 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:32.977831  522355 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:44:32.978039  522355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:44:32.978060  522355 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:44:32.980952  522355 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:32.981342  522355 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:32.981378  522355 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:32.981542  522355 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:44:32.981736  522355 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:44:32.981923  522355 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:44:32.982091  522355 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:44:33.063893  522355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:44:33.077288  522355 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-161305 -n ha-161305
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-161305 logs -n 25: (1.33904003s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305:/home/docker/cp-test_ha-161305-m03_ha-161305.txt                       |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305 sudo cat                                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305.txt                                 |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m02:/home/docker/cp-test_ha-161305-m03_ha-161305-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m02 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m04 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp testdata/cp-test.txt                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305:/home/docker/cp-test_ha-161305-m04_ha-161305.txt                       |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305 sudo cat                                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305.txt                                 |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m02:/home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m02 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03:/home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m03 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-161305 node stop m02 -v=7                                                     | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-161305 node start m02 -v=7                                                    | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:36:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:36:28.665664  516753 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:36:28.665890  516753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:36:28.665903  516753 out.go:304] Setting ErrFile to fd 2...
	I0730 00:36:28.665916  516753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:36:28.666443  516753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:36:28.667059  516753 out.go:298] Setting JSON to false
	I0730 00:36:28.668005  516753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8331,"bootTime":1722291458,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:36:28.668072  516753 start.go:139] virtualization: kvm guest
	I0730 00:36:28.670170  516753 out.go:177] * [ha-161305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:36:28.671509  516753 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:36:28.671514  516753 notify.go:220] Checking for updates...
	I0730 00:36:28.674276  516753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:36:28.675589  516753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:36:28.676888  516753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:36:28.678247  516753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:36:28.679713  516753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:36:28.681221  516753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:36:28.717149  516753 out.go:177] * Using the kvm2 driver based on user configuration
	I0730 00:36:28.718317  516753 start.go:297] selected driver: kvm2
	I0730 00:36:28.718336  516753 start.go:901] validating driver "kvm2" against <nil>
	I0730 00:36:28.718354  516753 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:36:28.719473  516753 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:36:28.719565  516753 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:36:28.735693  516753 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:36:28.735761  516753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 00:36:28.736094  516753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:36:28.736182  516753 cni.go:84] Creating CNI manager for ""
	I0730 00:36:28.736199  516753 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0730 00:36:28.736211  516753 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0730 00:36:28.736292  516753 start.go:340] cluster config:
	{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:36:28.736440  516753 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:36:28.738404  516753 out.go:177] * Starting "ha-161305" primary control-plane node in "ha-161305" cluster
	I0730 00:36:28.739904  516753 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:36:28.739969  516753 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:36:28.739984  516753 cache.go:56] Caching tarball of preloaded images
	I0730 00:36:28.740079  516753 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:36:28.740094  516753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:36:28.741152  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:36:28.741200  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json: {Name:mk0edeef8de82386ac1fad0fbd86252925ee5418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:28.741403  516753 start.go:360] acquireMachinesLock for ha-161305: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:36:28.741439  516753 start.go:364] duration metric: took 20.343µs to acquireMachinesLock for "ha-161305"
	I0730 00:36:28.741459  516753 start.go:93] Provisioning new machine with config: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:36:28.741617  516753 start.go:125] createHost starting for "" (driver="kvm2")
	I0730 00:36:28.743370  516753 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0730 00:36:28.743572  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:36:28.743621  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:36:28.759060  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I0730 00:36:28.759468  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:36:28.760031  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:36:28.760059  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:36:28.760391  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:36:28.760580  516753 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:36:28.760744  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:28.760894  516753 start.go:159] libmachine.API.Create for "ha-161305" (driver="kvm2")
	I0730 00:36:28.760920  516753 client.go:168] LocalClient.Create starting
	I0730 00:36:28.760974  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 00:36:28.761013  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:36:28.761032  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:36:28.761092  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 00:36:28.761119  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:36:28.761135  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:36:28.761265  516753 main.go:141] libmachine: Running pre-create checks...
	I0730 00:36:28.761292  516753 main.go:141] libmachine: (ha-161305) Calling .PreCreateCheck
	I0730 00:36:28.761634  516753 main.go:141] libmachine: (ha-161305) Calling .GetConfigRaw
	I0730 00:36:28.762027  516753 main.go:141] libmachine: Creating machine...
	I0730 00:36:28.762042  516753 main.go:141] libmachine: (ha-161305) Calling .Create
	I0730 00:36:28.762152  516753 main.go:141] libmachine: (ha-161305) Creating KVM machine...
	I0730 00:36:28.763494  516753 main.go:141] libmachine: (ha-161305) DBG | found existing default KVM network
	I0730 00:36:28.764231  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:28.764081  516776 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014870}
	I0730 00:36:28.764257  516753 main.go:141] libmachine: (ha-161305) DBG | created network xml: 
	I0730 00:36:28.764276  516753 main.go:141] libmachine: (ha-161305) DBG | <network>
	I0730 00:36:28.764288  516753 main.go:141] libmachine: (ha-161305) DBG |   <name>mk-ha-161305</name>
	I0730 00:36:28.764301  516753 main.go:141] libmachine: (ha-161305) DBG |   <dns enable='no'/>
	I0730 00:36:28.764312  516753 main.go:141] libmachine: (ha-161305) DBG |   
	I0730 00:36:28.764324  516753 main.go:141] libmachine: (ha-161305) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0730 00:36:28.764333  516753 main.go:141] libmachine: (ha-161305) DBG |     <dhcp>
	I0730 00:36:28.764340  516753 main.go:141] libmachine: (ha-161305) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0730 00:36:28.764347  516753 main.go:141] libmachine: (ha-161305) DBG |     </dhcp>
	I0730 00:36:28.764353  516753 main.go:141] libmachine: (ha-161305) DBG |   </ip>
	I0730 00:36:28.764359  516753 main.go:141] libmachine: (ha-161305) DBG |   
	I0730 00:36:28.764366  516753 main.go:141] libmachine: (ha-161305) DBG | </network>
	I0730 00:36:28.764373  516753 main.go:141] libmachine: (ha-161305) DBG | 
	I0730 00:36:28.769353  516753 main.go:141] libmachine: (ha-161305) DBG | trying to create private KVM network mk-ha-161305 192.168.39.0/24...
	I0730 00:36:28.840386  516753 main.go:141] libmachine: (ha-161305) DBG | private KVM network mk-ha-161305 192.168.39.0/24 created
	I0730 00:36:28.840425  516753 main.go:141] libmachine: (ha-161305) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305 ...
	I0730 00:36:28.840444  516753 main.go:141] libmachine: (ha-161305) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 00:36:28.840464  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:28.840417  516776 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:36:28.840589  516753 main.go:141] libmachine: (ha-161305) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 00:36:29.119872  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:29.119739  516776 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa...
	I0730 00:36:29.284121  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:29.283967  516776 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/ha-161305.rawdisk...
	I0730 00:36:29.284155  516753 main.go:141] libmachine: (ha-161305) DBG | Writing magic tar header
	I0730 00:36:29.284166  516753 main.go:141] libmachine: (ha-161305) DBG | Writing SSH key tar header
	I0730 00:36:29.284173  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:29.284111  516776 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305 ...
	I0730 00:36:29.284307  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305
	I0730 00:36:29.284350  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305 (perms=drwx------)
	I0730 00:36:29.284365  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 00:36:29.284379  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:36:29.284390  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 00:36:29.284399  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 00:36:29.284409  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home/jenkins
	I0730 00:36:29.284424  516753 main.go:141] libmachine: (ha-161305) DBG | Checking permissions on dir: /home
	I0730 00:36:29.284442  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 00:36:29.284453  516753 main.go:141] libmachine: (ha-161305) DBG | Skipping /home - not owner
	I0730 00:36:29.284470  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 00:36:29.284483  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 00:36:29.284497  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 00:36:29.284508  516753 main.go:141] libmachine: (ha-161305) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 00:36:29.284520  516753 main.go:141] libmachine: (ha-161305) Creating domain...
	I0730 00:36:29.285579  516753 main.go:141] libmachine: (ha-161305) define libvirt domain using xml: 
	I0730 00:36:29.285610  516753 main.go:141] libmachine: (ha-161305) <domain type='kvm'>
	I0730 00:36:29.285621  516753 main.go:141] libmachine: (ha-161305)   <name>ha-161305</name>
	I0730 00:36:29.285629  516753 main.go:141] libmachine: (ha-161305)   <memory unit='MiB'>2200</memory>
	I0730 00:36:29.285641  516753 main.go:141] libmachine: (ha-161305)   <vcpu>2</vcpu>
	I0730 00:36:29.285652  516753 main.go:141] libmachine: (ha-161305)   <features>
	I0730 00:36:29.285663  516753 main.go:141] libmachine: (ha-161305)     <acpi/>
	I0730 00:36:29.285672  516753 main.go:141] libmachine: (ha-161305)     <apic/>
	I0730 00:36:29.285681  516753 main.go:141] libmachine: (ha-161305)     <pae/>
	I0730 00:36:29.285693  516753 main.go:141] libmachine: (ha-161305)     
	I0730 00:36:29.285701  516753 main.go:141] libmachine: (ha-161305)   </features>
	I0730 00:36:29.285710  516753 main.go:141] libmachine: (ha-161305)   <cpu mode='host-passthrough'>
	I0730 00:36:29.285720  516753 main.go:141] libmachine: (ha-161305)   
	I0730 00:36:29.285727  516753 main.go:141] libmachine: (ha-161305)   </cpu>
	I0730 00:36:29.285735  516753 main.go:141] libmachine: (ha-161305)   <os>
	I0730 00:36:29.285743  516753 main.go:141] libmachine: (ha-161305)     <type>hvm</type>
	I0730 00:36:29.285753  516753 main.go:141] libmachine: (ha-161305)     <boot dev='cdrom'/>
	I0730 00:36:29.285767  516753 main.go:141] libmachine: (ha-161305)     <boot dev='hd'/>
	I0730 00:36:29.285779  516753 main.go:141] libmachine: (ha-161305)     <bootmenu enable='no'/>
	I0730 00:36:29.285786  516753 main.go:141] libmachine: (ha-161305)   </os>
	I0730 00:36:29.285795  516753 main.go:141] libmachine: (ha-161305)   <devices>
	I0730 00:36:29.285804  516753 main.go:141] libmachine: (ha-161305)     <disk type='file' device='cdrom'>
	I0730 00:36:29.285817  516753 main.go:141] libmachine: (ha-161305)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/boot2docker.iso'/>
	I0730 00:36:29.285829  516753 main.go:141] libmachine: (ha-161305)       <target dev='hdc' bus='scsi'/>
	I0730 00:36:29.285851  516753 main.go:141] libmachine: (ha-161305)       <readonly/>
	I0730 00:36:29.285871  516753 main.go:141] libmachine: (ha-161305)     </disk>
	I0730 00:36:29.285899  516753 main.go:141] libmachine: (ha-161305)     <disk type='file' device='disk'>
	I0730 00:36:29.285920  516753 main.go:141] libmachine: (ha-161305)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 00:36:29.285934  516753 main.go:141] libmachine: (ha-161305)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/ha-161305.rawdisk'/>
	I0730 00:36:29.285939  516753 main.go:141] libmachine: (ha-161305)       <target dev='hda' bus='virtio'/>
	I0730 00:36:29.285945  516753 main.go:141] libmachine: (ha-161305)     </disk>
	I0730 00:36:29.285952  516753 main.go:141] libmachine: (ha-161305)     <interface type='network'>
	I0730 00:36:29.285958  516753 main.go:141] libmachine: (ha-161305)       <source network='mk-ha-161305'/>
	I0730 00:36:29.285962  516753 main.go:141] libmachine: (ha-161305)       <model type='virtio'/>
	I0730 00:36:29.285967  516753 main.go:141] libmachine: (ha-161305)     </interface>
	I0730 00:36:29.285971  516753 main.go:141] libmachine: (ha-161305)     <interface type='network'>
	I0730 00:36:29.285977  516753 main.go:141] libmachine: (ha-161305)       <source network='default'/>
	I0730 00:36:29.285981  516753 main.go:141] libmachine: (ha-161305)       <model type='virtio'/>
	I0730 00:36:29.285986  516753 main.go:141] libmachine: (ha-161305)     </interface>
	I0730 00:36:29.285990  516753 main.go:141] libmachine: (ha-161305)     <serial type='pty'>
	I0730 00:36:29.285995  516753 main.go:141] libmachine: (ha-161305)       <target port='0'/>
	I0730 00:36:29.286003  516753 main.go:141] libmachine: (ha-161305)     </serial>
	I0730 00:36:29.286008  516753 main.go:141] libmachine: (ha-161305)     <console type='pty'>
	I0730 00:36:29.286012  516753 main.go:141] libmachine: (ha-161305)       <target type='serial' port='0'/>
	I0730 00:36:29.286025  516753 main.go:141] libmachine: (ha-161305)     </console>
	I0730 00:36:29.286034  516753 main.go:141] libmachine: (ha-161305)     <rng model='virtio'>
	I0730 00:36:29.286040  516753 main.go:141] libmachine: (ha-161305)       <backend model='random'>/dev/random</backend>
	I0730 00:36:29.286049  516753 main.go:141] libmachine: (ha-161305)     </rng>
	I0730 00:36:29.286054  516753 main.go:141] libmachine: (ha-161305)     
	I0730 00:36:29.286060  516753 main.go:141] libmachine: (ha-161305)     
	I0730 00:36:29.286094  516753 main.go:141] libmachine: (ha-161305)   </devices>
	I0730 00:36:29.286116  516753 main.go:141] libmachine: (ha-161305) </domain>
	I0730 00:36:29.286131  516753 main.go:141] libmachine: (ha-161305) 
	I0730 00:36:29.290560  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:48:5f:80 in network default
	I0730 00:36:29.291121  516753 main.go:141] libmachine: (ha-161305) Ensuring networks are active...
	I0730 00:36:29.291136  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:29.291772  516753 main.go:141] libmachine: (ha-161305) Ensuring network default is active
	I0730 00:36:29.292087  516753 main.go:141] libmachine: (ha-161305) Ensuring network mk-ha-161305 is active
	I0730 00:36:29.292564  516753 main.go:141] libmachine: (ha-161305) Getting domain xml...
	I0730 00:36:29.293265  516753 main.go:141] libmachine: (ha-161305) Creating domain...
	I0730 00:36:30.485952  516753 main.go:141] libmachine: (ha-161305) Waiting to get IP...
	I0730 00:36:30.486728  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:30.487172  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:30.487213  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:30.487165  516776 retry.go:31] will retry after 239.783115ms: waiting for machine to come up
	I0730 00:36:30.728669  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:30.729085  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:30.729112  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:30.729046  516776 retry.go:31] will retry after 334.71581ms: waiting for machine to come up
	I0730 00:36:31.065673  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:31.066051  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:31.066088  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:31.066028  516776 retry.go:31] will retry after 442.95444ms: waiting for machine to come up
	I0730 00:36:31.510831  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:31.511275  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:31.511298  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:31.511253  516776 retry.go:31] will retry after 609.120594ms: waiting for machine to come up
	I0730 00:36:32.121947  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:32.122399  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:32.122429  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:32.122317  516776 retry.go:31] will retry after 627.70006ms: waiting for machine to come up
	I0730 00:36:32.751197  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:32.751641  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:32.751693  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:32.751622  516776 retry.go:31] will retry after 574.420516ms: waiting for machine to come up
	I0730 00:36:33.327441  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:33.327861  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:33.327901  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:33.327809  516776 retry.go:31] will retry after 830.453811ms: waiting for machine to come up
	I0730 00:36:34.159438  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:34.159812  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:34.159836  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:34.159774  516776 retry.go:31] will retry after 954.381064ms: waiting for machine to come up
	I0730 00:36:35.116062  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:35.116448  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:35.116478  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:35.116404  516776 retry.go:31] will retry after 1.732818187s: waiting for machine to come up
	I0730 00:36:36.851343  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:36.851780  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:36.851811  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:36.851730  516776 retry.go:31] will retry after 1.834904059s: waiting for machine to come up
	I0730 00:36:38.688038  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:38.688585  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:38.688618  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:38.688530  516776 retry.go:31] will retry after 2.495048845s: waiting for machine to come up
	I0730 00:36:41.184694  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:41.185264  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:41.185289  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:41.185205  516776 retry.go:31] will retry after 2.40860982s: waiting for machine to come up
	I0730 00:36:43.596830  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:43.597316  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find current IP address of domain ha-161305 in network mk-ha-161305
	I0730 00:36:43.597343  516753 main.go:141] libmachine: (ha-161305) DBG | I0730 00:36:43.597271  516776 retry.go:31] will retry after 3.976089322s: waiting for machine to come up
	I0730 00:36:47.577942  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:47.578387  516753 main.go:141] libmachine: (ha-161305) Found IP for machine: 192.168.39.80
	I0730 00:36:47.578413  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has current primary IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:47.578426  516753 main.go:141] libmachine: (ha-161305) Reserving static IP address...
	I0730 00:36:47.578729  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find host DHCP lease matching {name: "ha-161305", mac: "52:54:00:11:58:6f", ip: "192.168.39.80"} in network mk-ha-161305
	I0730 00:36:47.651095  516753 main.go:141] libmachine: (ha-161305) DBG | Getting to WaitForSSH function...
	I0730 00:36:47.651129  516753 main.go:141] libmachine: (ha-161305) Reserved static IP address: 192.168.39.80
	I0730 00:36:47.651141  516753 main.go:141] libmachine: (ha-161305) Waiting for SSH to be available...
	I0730 00:36:47.653320  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:47.653624  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305
	I0730 00:36:47.653658  516753 main.go:141] libmachine: (ha-161305) DBG | unable to find defined IP address of network mk-ha-161305 interface with MAC address 52:54:00:11:58:6f
	I0730 00:36:47.653813  516753 main.go:141] libmachine: (ha-161305) DBG | Using SSH client type: external
	I0730 00:36:47.653857  516753 main.go:141] libmachine: (ha-161305) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa (-rw-------)
	I0730 00:36:47.653897  516753 main.go:141] libmachine: (ha-161305) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:36:47.653916  516753 main.go:141] libmachine: (ha-161305) DBG | About to run SSH command:
	I0730 00:36:47.653932  516753 main.go:141] libmachine: (ha-161305) DBG | exit 0
	I0730 00:36:47.657769  516753 main.go:141] libmachine: (ha-161305) DBG | SSH cmd err, output: exit status 255: 
	I0730 00:36:47.657788  516753 main.go:141] libmachine: (ha-161305) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0730 00:36:47.657795  516753 main.go:141] libmachine: (ha-161305) DBG | command : exit 0
	I0730 00:36:47.657800  516753 main.go:141] libmachine: (ha-161305) DBG | err     : exit status 255
	I0730 00:36:47.657806  516753 main.go:141] libmachine: (ha-161305) DBG | output  : 
	I0730 00:36:50.658959  516753 main.go:141] libmachine: (ha-161305) DBG | Getting to WaitForSSH function...
	I0730 00:36:50.661233  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.661552  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:50.661578  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.661661  516753 main.go:141] libmachine: (ha-161305) DBG | Using SSH client type: external
	I0730 00:36:50.661684  516753 main.go:141] libmachine: (ha-161305) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa (-rw-------)
	I0730 00:36:50.661704  516753 main.go:141] libmachine: (ha-161305) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:36:50.661717  516753 main.go:141] libmachine: (ha-161305) DBG | About to run SSH command:
	I0730 00:36:50.661726  516753 main.go:141] libmachine: (ha-161305) DBG | exit 0
	I0730 00:36:50.788532  516753 main.go:141] libmachine: (ha-161305) DBG | SSH cmd err, output: <nil>: 
	I0730 00:36:50.788857  516753 main.go:141] libmachine: (ha-161305) KVM machine creation complete!
	I0730 00:36:50.789193  516753 main.go:141] libmachine: (ha-161305) Calling .GetConfigRaw
	I0730 00:36:50.789777  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:50.789988  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:50.790144  516753 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 00:36:50.790161  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:36:50.791244  516753 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 00:36:50.791258  516753 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 00:36:50.791263  516753 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 00:36:50.791268  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:50.793663  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.794007  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:50.794027  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.794165  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:50.794342  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:50.794507  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:50.794664  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:50.794836  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:50.795128  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:50.795144  516753 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 00:36:50.904092  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:36:50.904121  516753 main.go:141] libmachine: Detecting the provisioner...
	I0730 00:36:50.904134  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:50.906802  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.907194  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:50.907225  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:50.907374  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:50.907633  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:50.907794  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:50.907942  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:50.908184  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:50.908436  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:50.908450  516753 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 00:36:51.021254  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 00:36:51.021349  516753 main.go:141] libmachine: found compatible host: buildroot
	I0730 00:36:51.021364  516753 main.go:141] libmachine: Provisioning with buildroot...
	I0730 00:36:51.021380  516753 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:36:51.021661  516753 buildroot.go:166] provisioning hostname "ha-161305"
	I0730 00:36:51.021694  516753 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:36:51.021868  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.024286  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.024603  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.024629  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.024726  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.024898  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.025041  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.025219  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.025381  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:51.025570  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:51.025585  516753 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305 && echo "ha-161305" | sudo tee /etc/hostname
	I0730 00:36:51.149628  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:36:51.149675  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.152336  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.152651  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.152673  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.152955  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.153209  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.153388  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.153535  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.153679  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:51.153894  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:51.153918  516753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:36:51.272933  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:36:51.272971  516753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:36:51.273004  516753 buildroot.go:174] setting up certificates
	I0730 00:36:51.273041  516753 provision.go:84] configureAuth start
	I0730 00:36:51.273063  516753 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:36:51.273376  516753 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:36:51.276188  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.276543  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.276572  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.276731  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.278888  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.279207  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.279234  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.279332  516753 provision.go:143] copyHostCerts
	I0730 00:36:51.279368  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:36:51.279420  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:36:51.279439  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:36:51.279508  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:36:51.279633  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:36:51.279656  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:36:51.279664  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:36:51.279692  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:36:51.279737  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:36:51.279753  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:36:51.279759  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:36:51.279780  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:36:51.279828  516753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305 san=[127.0.0.1 192.168.39.80 ha-161305 localhost minikube]
	I0730 00:36:51.487281  516753 provision.go:177] copyRemoteCerts
	I0730 00:36:51.487351  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:36:51.487378  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.490053  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.490403  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.490433  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.490564  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.490767  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.490939  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.491079  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:36:51.574497  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:36:51.574583  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0730 00:36:51.596184  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:36:51.596261  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:36:51.617691  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:36:51.617771  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:36:51.638785  516753 provision.go:87] duration metric: took 365.724901ms to configureAuth
	I0730 00:36:51.638814  516753 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:36:51.638988  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:36:51.639061  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.641680  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.641975  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.641998  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.642137  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.642374  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.642561  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.642745  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.642912  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:51.643137  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:51.643156  516753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:36:51.909063  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:36:51.909102  516753 main.go:141] libmachine: Checking connection to Docker...
	I0730 00:36:51.909113  516753 main.go:141] libmachine: (ha-161305) Calling .GetURL
	I0730 00:36:51.910422  516753 main.go:141] libmachine: (ha-161305) DBG | Using libvirt version 6000000
	I0730 00:36:51.912944  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.913304  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.913331  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.913520  516753 main.go:141] libmachine: Docker is up and running!
	I0730 00:36:51.913547  516753 main.go:141] libmachine: Reticulating splines...
	I0730 00:36:51.913560  516753 client.go:171] duration metric: took 23.152629816s to LocalClient.Create
	I0730 00:36:51.913590  516753 start.go:167] duration metric: took 23.152697956s to libmachine.API.Create "ha-161305"
	I0730 00:36:51.913602  516753 start.go:293] postStartSetup for "ha-161305" (driver="kvm2")
	I0730 00:36:51.913616  516753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:36:51.913639  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:51.913876  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:36:51.913901  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:51.915857  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.916183  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:51.916209  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:51.916342  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:51.916522  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:51.916733  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:51.916868  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:36:52.003019  516753 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:36:52.007144  516753 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:36:52.007172  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:36:52.007251  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:36:52.007361  516753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:36:52.007376  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:36:52.007499  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:36:52.016416  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:36:52.040876  516753 start.go:296] duration metric: took 127.258508ms for postStartSetup
	I0730 00:36:52.040938  516753 main.go:141] libmachine: (ha-161305) Calling .GetConfigRaw
	I0730 00:36:52.041604  516753 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:36:52.043938  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.044291  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.044334  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.044578  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:36:52.044782  516753 start.go:128] duration metric: took 23.303148719s to createHost
	I0730 00:36:52.044807  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:52.047035  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.047331  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.047354  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.047494  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:52.047702  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:52.047910  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:52.048082  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:52.048243  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:36:52.048418  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:36:52.048428  516753 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 00:36:52.157068  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722299812.133921114
	
	I0730 00:36:52.157091  516753 fix.go:216] guest clock: 1722299812.133921114
	I0730 00:36:52.157099  516753 fix.go:229] Guest: 2024-07-30 00:36:52.133921114 +0000 UTC Remote: 2024-07-30 00:36:52.044794617 +0000 UTC m=+23.414617294 (delta=89.126497ms)
	I0730 00:36:52.157138  516753 fix.go:200] guest clock delta is within tolerance: 89.126497ms
	I0730 00:36:52.157145  516753 start.go:83] releasing machines lock for "ha-161305", held for 23.415698873s
	I0730 00:36:52.157166  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:52.157441  516753 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:36:52.159934  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.160295  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.160321  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.160463  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:52.160968  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:52.161121  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:36:52.161200  516753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:36:52.161249  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:52.161362  516753 ssh_runner.go:195] Run: cat /version.json
	I0730 00:36:52.161390  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:36:52.163860  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.164135  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.164179  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.164201  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.164371  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:52.164542  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:52.164585  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:52.164609  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:52.164727  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:52.164786  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:36:52.164867  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:36:52.164987  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:36:52.165132  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:36:52.165321  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:36:52.276182  516753 ssh_runner.go:195] Run: systemctl --version
	I0730 00:36:52.281852  516753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:36:52.439457  516753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:36:52.444741  516753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:36:52.444803  516753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:36:52.460399  516753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 00:36:52.460430  516753 start.go:495] detecting cgroup driver to use...
	I0730 00:36:52.460514  516753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:36:52.475665  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:36:52.488459  516753 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:36:52.488535  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:36:52.501535  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:36:52.514467  516753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:36:52.627090  516753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:36:52.788767  516753 docker.go:233] disabling docker service ...
	I0730 00:36:52.788852  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:36:52.802434  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:36:52.814436  516753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:36:52.921251  516753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:36:53.028623  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:36:53.042213  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:36:53.060248  516753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:36:53.060320  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.070414  516753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:36:53.070477  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.080480  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.090281  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.100034  516753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:36:53.109808  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.119641  516753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.135491  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:36:53.145379  516753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:36:53.154207  516753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 00:36:53.154262  516753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 00:36:53.166031  516753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:36:53.175065  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:36:53.290658  516753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 00:36:53.423478  516753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:36:53.423568  516753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:36:53.428094  516753 start.go:563] Will wait 60s for crictl version
	I0730 00:36:53.428157  516753 ssh_runner.go:195] Run: which crictl
	I0730 00:36:53.431658  516753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:36:53.465361  516753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:36:53.465460  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:36:53.492262  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:36:53.526188  516753 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:36:53.527332  516753 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:36:53.530247  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:53.530612  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:36:53.530634  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:36:53.530930  516753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:36:53.534585  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:36:53.546420  516753 kubeadm.go:883] updating cluster {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 00:36:53.546534  516753 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:36:53.546588  516753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:36:53.577859  516753 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0730 00:36:53.577943  516753 ssh_runner.go:195] Run: which lz4
	I0730 00:36:53.581468  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0730 00:36:53.581568  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0730 00:36:53.585294  516753 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0730 00:36:53.585326  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0730 00:36:54.816391  516753 crio.go:462] duration metric: took 1.234848456s to copy over tarball
	I0730 00:36:54.816475  516753 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0730 00:36:56.911570  516753 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.095054945s)
	I0730 00:36:56.911599  516753 crio.go:469] duration metric: took 2.095181748s to extract the tarball
	I0730 00:36:56.911608  516753 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0730 00:36:56.948772  516753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:36:56.992406  516753 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:36:56.992435  516753 cache_images.go:84] Images are preloaded, skipping loading
	I0730 00:36:56.992445  516753 kubeadm.go:934] updating node { 192.168.39.80 8443 v1.30.3 crio true true} ...
	I0730 00:36:56.992565  516753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:36:56.992637  516753 ssh_runner.go:195] Run: crio config
	I0730 00:36:57.041933  516753 cni.go:84] Creating CNI manager for ""
	I0730 00:36:57.041951  516753 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0730 00:36:57.041961  516753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 00:36:57.041989  516753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-161305 NodeName:ha-161305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 00:36:57.042155  516753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-161305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 00:36:57.042192  516753 kube-vip.go:115] generating kube-vip config ...
	I0730 00:36:57.042237  516753 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:36:57.059814  516753 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:36:57.059952  516753 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0730 00:36:57.060023  516753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:36:57.070656  516753 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 00:36:57.070744  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0730 00:36:57.079149  516753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0730 00:36:57.094014  516753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:36:57.108514  516753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0730 00:36:57.123209  516753 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0730 00:36:57.138028  516753 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:36:57.141570  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:36:57.152390  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:36:57.258370  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:36:57.274613  516753 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.80
	I0730 00:36:57.274643  516753 certs.go:194] generating shared ca certs ...
	I0730 00:36:57.274667  516753 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.274869  516753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:36:57.274934  516753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:36:57.274947  516753 certs.go:256] generating profile certs ...
	I0730 00:36:57.275035  516753 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:36:57.275054  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt with IP's: []
	I0730 00:36:57.389571  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt ...
	I0730 00:36:57.389613  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt: {Name:mk843da8ae9ed625b23bd908faf33ddb4ca461d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.389868  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key ...
	I0730 00:36:57.389891  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key: {Name:mk274e912f3472d2666bb12e5007c3c4813bd0a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.390021  516753 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.057b504c
	I0730 00:36:57.390045  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.057b504c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.254]
	I0730 00:36:57.498383  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.057b504c ...
	I0730 00:36:57.498417  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.057b504c: {Name:mk2a527a45349e6fa9ab7deb641f7395792f53c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.498583  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.057b504c ...
	I0730 00:36:57.498595  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.057b504c: {Name:mk05d6758edb948cdfd9957e0f080b273a5f0228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.498665  516753 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.057b504c -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:36:57.498735  516753 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.057b504c -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
	I0730 00:36:57.498788  516753 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:36:57.498802  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt with IP's: []
	I0730 00:36:57.601866  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt ...
	I0730 00:36:57.601898  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt: {Name:mk095a9a459cefeb454917fa27f54c463b594076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.602058  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key ...
	I0730 00:36:57.602068  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key: {Name:mkd155d484a412cbbfe26d3a22d9b60af6c16e24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:36:57.602133  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:36:57.602150  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:36:57.602161  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:36:57.602174  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:36:57.602187  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:36:57.602199  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:36:57.602211  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:36:57.602223  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:36:57.602277  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:36:57.602310  516753 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:36:57.602317  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:36:57.602342  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:36:57.602362  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:36:57.602383  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:36:57.602419  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:36:57.602444  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:36:57.602458  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:36:57.602472  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:36:57.602981  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:36:57.626494  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:36:57.647705  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:36:57.669245  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:36:57.691116  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0730 00:36:57.712595  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 00:36:57.734074  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:36:57.758402  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:36:57.782242  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:36:57.804174  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:36:57.826153  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:36:57.847907  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 00:36:57.863640  516753 ssh_runner.go:195] Run: openssl version
	I0730 00:36:57.868944  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:36:57.878813  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:36:57.882829  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:36:57.882898  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:36:57.888227  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:36:57.897990  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:36:57.907698  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:36:57.911693  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:36:57.911741  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:36:57.917271  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 00:36:57.927334  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:36:57.937303  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:36:57.941240  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:36:57.941297  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:36:57.946456  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:36:57.956332  516753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:36:57.959973  516753 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 00:36:57.960030  516753 kubeadm.go:392] StartCluster: {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:36:57.960102  516753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 00:36:57.960143  516753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 00:36:57.994408  516753 cri.go:89] found id: ""
	I0730 00:36:57.994476  516753 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 00:36:58.003813  516753 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0730 00:36:58.012921  516753 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 00:36:58.024757  516753 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 00:36:58.024780  516753 kubeadm.go:157] found existing configuration files:
	
	I0730 00:36:58.024825  516753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 00:36:58.036125  516753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 00:36:58.036202  516753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 00:36:58.048562  516753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 00:36:58.061788  516753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 00:36:58.061844  516753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 00:36:58.074477  516753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 00:36:58.084595  516753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 00:36:58.084662  516753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 00:36:58.097064  516753 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 00:36:58.105879  516753 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 00:36:58.105934  516753 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0730 00:36:58.114897  516753 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0730 00:36:58.218700  516753 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0730 00:36:58.218764  516753 kubeadm.go:310] [preflight] Running pre-flight checks
	I0730 00:36:58.335153  516753 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0730 00:36:58.335290  516753 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0730 00:36:58.335438  516753 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0730 00:36:58.526324  516753 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0730 00:36:58.528609  516753 out.go:204]   - Generating certificates and keys ...
	I0730 00:36:58.528725  516753 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0730 00:36:58.528797  516753 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0730 00:36:58.612166  516753 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0730 00:36:58.888864  516753 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0730 00:36:59.081294  516753 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0730 00:36:59.174030  516753 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0730 00:36:59.254970  516753 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0730 00:36:59.255271  516753 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-161305 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I0730 00:36:59.391004  516753 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0730 00:36:59.391352  516753 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-161305 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I0730 00:36:59.467999  516753 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0730 00:36:59.584232  516753 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0730 00:37:00.068580  516753 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0730 00:37:00.068665  516753 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0730 00:37:00.222101  516753 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0730 00:37:00.294638  516753 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0730 00:37:00.673109  516753 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0730 00:37:00.790780  516753 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0730 00:37:01.027593  516753 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0730 00:37:01.027998  516753 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0730 00:37:01.030626  516753 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0730 00:37:01.032462  516753 out.go:204]   - Booting up control plane ...
	I0730 00:37:01.032586  516753 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0730 00:37:01.032737  516753 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0730 00:37:01.032857  516753 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0730 00:37:01.047285  516753 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0730 00:37:01.050405  516753 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0730 00:37:01.050477  516753 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0730 00:37:01.181508  516753 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0730 00:37:01.181612  516753 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0730 00:37:01.682528  516753 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.374331ms
	I0730 00:37:01.682641  516753 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0730 00:37:07.646757  516753 kubeadm.go:310] [api-check] The API server is healthy after 5.96759416s
	I0730 00:37:07.659136  516753 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0730 00:37:07.675124  516753 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0730 00:37:07.702355  516753 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0730 00:37:07.702616  516753 kubeadm.go:310] [mark-control-plane] Marking the node ha-161305 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0730 00:37:07.718852  516753 kubeadm.go:310] [bootstrap-token] Using token: r6ju3c.hq3k4ysj5ca33xmr
	I0730 00:37:07.720572  516753 out.go:204]   - Configuring RBAC rules ...
	I0730 00:37:07.720767  516753 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0730 00:37:07.728868  516753 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0730 00:37:07.744401  516753 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0730 00:37:07.749030  516753 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0730 00:37:07.752564  516753 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0730 00:37:07.756175  516753 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0730 00:37:08.054029  516753 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0730 00:37:08.493513  516753 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0730 00:37:09.054914  516753 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0730 00:37:09.055973  516753 kubeadm.go:310] 
	I0730 00:37:09.056050  516753 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0730 00:37:09.056058  516753 kubeadm.go:310] 
	I0730 00:37:09.056159  516753 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0730 00:37:09.056179  516753 kubeadm.go:310] 
	I0730 00:37:09.056209  516753 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0730 00:37:09.056349  516753 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0730 00:37:09.056416  516753 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0730 00:37:09.056428  516753 kubeadm.go:310] 
	I0730 00:37:09.056481  516753 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0730 00:37:09.056489  516753 kubeadm.go:310] 
	I0730 00:37:09.056544  516753 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0730 00:37:09.056570  516753 kubeadm.go:310] 
	I0730 00:37:09.056656  516753 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0730 00:37:09.056768  516753 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0730 00:37:09.056866  516753 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0730 00:37:09.056877  516753 kubeadm.go:310] 
	I0730 00:37:09.056982  516753 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0730 00:37:09.057097  516753 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0730 00:37:09.057107  516753 kubeadm.go:310] 
	I0730 00:37:09.057215  516753 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r6ju3c.hq3k4ysj5ca33xmr \
	I0730 00:37:09.057374  516753 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 \
	I0730 00:37:09.057415  516753 kubeadm.go:310] 	--control-plane 
	I0730 00:37:09.057423  516753 kubeadm.go:310] 
	I0730 00:37:09.057534  516753 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0730 00:37:09.057553  516753 kubeadm.go:310] 
	I0730 00:37:09.057674  516753 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r6ju3c.hq3k4ysj5ca33xmr \
	I0730 00:37:09.057816  516753 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 
	I0730 00:37:09.058064  516753 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0730 00:37:09.058085  516753 cni.go:84] Creating CNI manager for ""
	I0730 00:37:09.058093  516753 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0730 00:37:09.060438  516753 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0730 00:37:09.061718  516753 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0730 00:37:09.066780  516753 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0730 00:37:09.066799  516753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0730 00:37:09.086768  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0730 00:37:09.490993  516753 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0730 00:37:09.491069  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:09.491098  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-161305 minikube.k8s.io/updated_at=2024_07_30T00_37_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=ha-161305 minikube.k8s.io/primary=true
	I0730 00:37:09.628964  516753 ops.go:34] apiserver oom_adj: -16
	I0730 00:37:09.646846  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:10.147120  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:10.647387  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:11.147045  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:11.647233  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:12.146973  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:12.647507  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:13.147490  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:13.647655  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:14.147826  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:14.647712  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:15.147373  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:15.647863  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:16.147195  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:16.646934  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:17.146898  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:17.647144  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:18.147654  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:18.647934  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:19.146949  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:19.646894  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:20.147326  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:20.647256  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:21.147642  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:21.647509  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 00:37:21.733926  516753 kubeadm.go:1113] duration metric: took 12.24291342s to wait for elevateKubeSystemPrivileges
	I0730 00:37:21.733960  516753 kubeadm.go:394] duration metric: took 23.773935661s to StartCluster
	I0730 00:37:21.733985  516753 settings.go:142] acquiring lock: {Name:mk89b2537c1ec20302d90ab73b167422bb363b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:21.734072  516753 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:37:21.734927  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/kubeconfig: {Name:mk6ecf4e5b07b810f1fa2b9790857d7586f0cf41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:21.735193  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0730 00:37:21.735204  516753 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:37:21.735232  516753 start.go:241] waiting for startup goroutines ...
	I0730 00:37:21.735242  516753 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0730 00:37:21.735312  516753 addons.go:69] Setting storage-provisioner=true in profile "ha-161305"
	I0730 00:37:21.735329  516753 addons.go:69] Setting default-storageclass=true in profile "ha-161305"
	I0730 00:37:21.735344  516753 addons.go:234] Setting addon storage-provisioner=true in "ha-161305"
	I0730 00:37:21.735357  516753 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-161305"
	I0730 00:37:21.735397  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:37:21.735425  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:37:21.735742  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.735774  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.735808  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.735849  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.750956  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0730 00:37:21.751378  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.751898  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.751921  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.752275  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.752477  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:37:21.754660  516753 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:37:21.754899  516753 kapi.go:59] client config for ha-161305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key", CAFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0730 00:37:21.755364  516753 cert_rotation.go:137] Starting client certificate rotation controller
	I0730 00:37:21.755557  516753 addons.go:234] Setting addon default-storageclass=true in "ha-161305"
	I0730 00:37:21.755594  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:37:21.755877  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.755920  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.756851  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39317
	I0730 00:37:21.757395  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.757968  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.757993  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.758361  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.758864  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.758917  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.771946  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0730 00:37:21.772486  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.773013  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.773040  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.773418  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.773972  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:21.774004  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:21.774124  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43581
	I0730 00:37:21.774491  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.774929  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.774950  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.775248  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.775440  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:37:21.777325  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:37:21.779280  516753 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 00:37:21.780628  516753 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 00:37:21.780644  516753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0730 00:37:21.780658  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:37:21.783472  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:21.783953  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:37:21.783987  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:21.784135  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:37:21.784291  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:37:21.784443  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:37:21.784649  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:37:21.795042  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37343
	I0730 00:37:21.795491  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:21.796046  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:21.796075  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:21.796438  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:21.796688  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:37:21.798525  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:37:21.798763  516753 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0730 00:37:21.798782  516753 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0730 00:37:21.798803  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:37:21.801238  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:21.801697  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:37:21.801725  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:21.801908  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:37:21.802086  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:37:21.802251  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:37:21.802411  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:37:21.899791  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0730 00:37:22.006706  516753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 00:37:22.058875  516753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0730 00:37:22.297829  516753 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0730 00:37:22.682018  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.682048  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.682119  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.682145  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.682352  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.682369  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.682385  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.682393  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.682454  516753 main.go:141] libmachine: (ha-161305) DBG | Closing plugin on server side
	I0730 00:37:22.682500  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.682521  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.682532  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.682543  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.682636  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.682652  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.682652  516753 main.go:141] libmachine: (ha-161305) DBG | Closing plugin on server side
	I0730 00:37:22.682831  516753 main.go:141] libmachine: (ha-161305) DBG | Closing plugin on server side
	I0730 00:37:22.682901  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.682919  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.683079  516753 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0730 00:37:22.683087  516753 round_trippers.go:469] Request Headers:
	I0730 00:37:22.683097  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:37:22.683102  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:37:22.696319  516753 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0730 00:37:22.696931  516753 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0730 00:37:22.696945  516753 round_trippers.go:469] Request Headers:
	I0730 00:37:22.696953  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:37:22.696957  516753 round_trippers.go:473]     Content-Type: application/json
	I0730 00:37:22.696961  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:37:22.699806  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:37:22.699972  516753 main.go:141] libmachine: Making call to close driver server
	I0730 00:37:22.699984  516753 main.go:141] libmachine: (ha-161305) Calling .Close
	I0730 00:37:22.700328  516753 main.go:141] libmachine: Successfully made call to close driver server
	I0730 00:37:22.700356  516753 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 00:37:22.702109  516753 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0730 00:37:22.703311  516753 addons.go:510] duration metric: took 968.066182ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0730 00:37:22.703359  516753 start.go:246] waiting for cluster config update ...
	I0730 00:37:22.703379  516753 start.go:255] writing updated cluster config ...
	I0730 00:37:22.704828  516753 out.go:177] 
	I0730 00:37:22.706225  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:37:22.706298  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:37:22.707934  516753 out.go:177] * Starting "ha-161305-m02" control-plane node in "ha-161305" cluster
	I0730 00:37:22.709138  516753 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:37:22.709166  516753 cache.go:56] Caching tarball of preloaded images
	I0730 00:37:22.709259  516753 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:37:22.709274  516753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:37:22.709362  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:37:22.709515  516753 start.go:360] acquireMachinesLock for ha-161305-m02: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:37:22.709562  516753 start.go:364] duration metric: took 25.739µs to acquireMachinesLock for "ha-161305-m02"
	I0730 00:37:22.709586  516753 start.go:93] Provisioning new machine with config: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:37:22.709656  516753 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0730 00:37:22.711233  516753 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0730 00:37:22.711332  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:22.711357  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:22.728619  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0730 00:37:22.729175  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:22.729796  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:22.729820  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:22.730213  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:22.730428  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetMachineName
	I0730 00:37:22.730581  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:22.730803  516753 start.go:159] libmachine.API.Create for "ha-161305" (driver="kvm2")
	I0730 00:37:22.730837  516753 client.go:168] LocalClient.Create starting
	I0730 00:37:22.730877  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 00:37:22.730919  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:37:22.730941  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:37:22.731011  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 00:37:22.731039  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:37:22.731063  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:37:22.731088  516753 main.go:141] libmachine: Running pre-create checks...
	I0730 00:37:22.731101  516753 main.go:141] libmachine: (ha-161305-m02) Calling .PreCreateCheck
	I0730 00:37:22.731285  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetConfigRaw
	I0730 00:37:22.731690  516753 main.go:141] libmachine: Creating machine...
	I0730 00:37:22.731706  516753 main.go:141] libmachine: (ha-161305-m02) Calling .Create
	I0730 00:37:22.731832  516753 main.go:141] libmachine: (ha-161305-m02) Creating KVM machine...
	I0730 00:37:22.732984  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found existing default KVM network
	I0730 00:37:22.733134  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found existing private KVM network mk-ha-161305
	I0730 00:37:22.733295  516753 main.go:141] libmachine: (ha-161305-m02) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02 ...
	I0730 00:37:22.733321  516753 main.go:141] libmachine: (ha-161305-m02) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 00:37:22.733391  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:22.733273  517154 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:37:22.733488  516753 main.go:141] libmachine: (ha-161305-m02) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 00:37:23.012758  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:23.012585  517154 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa...
	I0730 00:37:23.495090  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:23.494941  517154 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/ha-161305-m02.rawdisk...
	I0730 00:37:23.495124  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Writing magic tar header
	I0730 00:37:23.495140  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Writing SSH key tar header
	I0730 00:37:23.495148  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:23.495060  517154 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02 ...
	I0730 00:37:23.495160  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02
	I0730 00:37:23.495242  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02 (perms=drwx------)
	I0730 00:37:23.495265  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 00:37:23.495273  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 00:37:23.495282  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:37:23.495291  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 00:37:23.495300  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 00:37:23.495316  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 00:37:23.495323  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home/jenkins
	I0730 00:37:23.495330  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 00:37:23.495339  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 00:37:23.495346  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Checking permissions on dir: /home
	I0730 00:37:23.495354  516753 main.go:141] libmachine: (ha-161305-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 00:37:23.495362  516753 main.go:141] libmachine: (ha-161305-m02) Creating domain...
	I0730 00:37:23.495372  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Skipping /home - not owner
	I0730 00:37:23.496352  516753 main.go:141] libmachine: (ha-161305-m02) define libvirt domain using xml: 
	I0730 00:37:23.496371  516753 main.go:141] libmachine: (ha-161305-m02) <domain type='kvm'>
	I0730 00:37:23.496378  516753 main.go:141] libmachine: (ha-161305-m02)   <name>ha-161305-m02</name>
	I0730 00:37:23.496384  516753 main.go:141] libmachine: (ha-161305-m02)   <memory unit='MiB'>2200</memory>
	I0730 00:37:23.496392  516753 main.go:141] libmachine: (ha-161305-m02)   <vcpu>2</vcpu>
	I0730 00:37:23.496398  516753 main.go:141] libmachine: (ha-161305-m02)   <features>
	I0730 00:37:23.496408  516753 main.go:141] libmachine: (ha-161305-m02)     <acpi/>
	I0730 00:37:23.496416  516753 main.go:141] libmachine: (ha-161305-m02)     <apic/>
	I0730 00:37:23.496422  516753 main.go:141] libmachine: (ha-161305-m02)     <pae/>
	I0730 00:37:23.496427  516753 main.go:141] libmachine: (ha-161305-m02)     
	I0730 00:37:23.496432  516753 main.go:141] libmachine: (ha-161305-m02)   </features>
	I0730 00:37:23.496440  516753 main.go:141] libmachine: (ha-161305-m02)   <cpu mode='host-passthrough'>
	I0730 00:37:23.496445  516753 main.go:141] libmachine: (ha-161305-m02)   
	I0730 00:37:23.496450  516753 main.go:141] libmachine: (ha-161305-m02)   </cpu>
	I0730 00:37:23.496470  516753 main.go:141] libmachine: (ha-161305-m02)   <os>
	I0730 00:37:23.496492  516753 main.go:141] libmachine: (ha-161305-m02)     <type>hvm</type>
	I0730 00:37:23.496503  516753 main.go:141] libmachine: (ha-161305-m02)     <boot dev='cdrom'/>
	I0730 00:37:23.496519  516753 main.go:141] libmachine: (ha-161305-m02)     <boot dev='hd'/>
	I0730 00:37:23.496534  516753 main.go:141] libmachine: (ha-161305-m02)     <bootmenu enable='no'/>
	I0730 00:37:23.496550  516753 main.go:141] libmachine: (ha-161305-m02)   </os>
	I0730 00:37:23.496558  516753 main.go:141] libmachine: (ha-161305-m02)   <devices>
	I0730 00:37:23.496563  516753 main.go:141] libmachine: (ha-161305-m02)     <disk type='file' device='cdrom'>
	I0730 00:37:23.496572  516753 main.go:141] libmachine: (ha-161305-m02)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/boot2docker.iso'/>
	I0730 00:37:23.496577  516753 main.go:141] libmachine: (ha-161305-m02)       <target dev='hdc' bus='scsi'/>
	I0730 00:37:23.496584  516753 main.go:141] libmachine: (ha-161305-m02)       <readonly/>
	I0730 00:37:23.496591  516753 main.go:141] libmachine: (ha-161305-m02)     </disk>
	I0730 00:37:23.496597  516753 main.go:141] libmachine: (ha-161305-m02)     <disk type='file' device='disk'>
	I0730 00:37:23.496606  516753 main.go:141] libmachine: (ha-161305-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 00:37:23.496614  516753 main.go:141] libmachine: (ha-161305-m02)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/ha-161305-m02.rawdisk'/>
	I0730 00:37:23.496621  516753 main.go:141] libmachine: (ha-161305-m02)       <target dev='hda' bus='virtio'/>
	I0730 00:37:23.496627  516753 main.go:141] libmachine: (ha-161305-m02)     </disk>
	I0730 00:37:23.496636  516753 main.go:141] libmachine: (ha-161305-m02)     <interface type='network'>
	I0730 00:37:23.496665  516753 main.go:141] libmachine: (ha-161305-m02)       <source network='mk-ha-161305'/>
	I0730 00:37:23.496685  516753 main.go:141] libmachine: (ha-161305-m02)       <model type='virtio'/>
	I0730 00:37:23.496697  516753 main.go:141] libmachine: (ha-161305-m02)     </interface>
	I0730 00:37:23.496717  516753 main.go:141] libmachine: (ha-161305-m02)     <interface type='network'>
	I0730 00:37:23.496726  516753 main.go:141] libmachine: (ha-161305-m02)       <source network='default'/>
	I0730 00:37:23.496739  516753 main.go:141] libmachine: (ha-161305-m02)       <model type='virtio'/>
	I0730 00:37:23.496754  516753 main.go:141] libmachine: (ha-161305-m02)     </interface>
	I0730 00:37:23.496770  516753 main.go:141] libmachine: (ha-161305-m02)     <serial type='pty'>
	I0730 00:37:23.496785  516753 main.go:141] libmachine: (ha-161305-m02)       <target port='0'/>
	I0730 00:37:23.496796  516753 main.go:141] libmachine: (ha-161305-m02)     </serial>
	I0730 00:37:23.496803  516753 main.go:141] libmachine: (ha-161305-m02)     <console type='pty'>
	I0730 00:37:23.496810  516753 main.go:141] libmachine: (ha-161305-m02)       <target type='serial' port='0'/>
	I0730 00:37:23.496817  516753 main.go:141] libmachine: (ha-161305-m02)     </console>
	I0730 00:37:23.496822  516753 main.go:141] libmachine: (ha-161305-m02)     <rng model='virtio'>
	I0730 00:37:23.496831  516753 main.go:141] libmachine: (ha-161305-m02)       <backend model='random'>/dev/random</backend>
	I0730 00:37:23.496839  516753 main.go:141] libmachine: (ha-161305-m02)     </rng>
	I0730 00:37:23.496843  516753 main.go:141] libmachine: (ha-161305-m02)     
	I0730 00:37:23.496849  516753 main.go:141] libmachine: (ha-161305-m02)     
	I0730 00:37:23.496853  516753 main.go:141] libmachine: (ha-161305-m02)   </devices>
	I0730 00:37:23.496867  516753 main.go:141] libmachine: (ha-161305-m02) </domain>
	I0730 00:37:23.496881  516753 main.go:141] libmachine: (ha-161305-m02) 
	I0730 00:37:23.503402  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:8a:3b:66 in network default
	I0730 00:37:23.503981  516753 main.go:141] libmachine: (ha-161305-m02) Ensuring networks are active...
	I0730 00:37:23.504028  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:23.504678  516753 main.go:141] libmachine: (ha-161305-m02) Ensuring network default is active
	I0730 00:37:23.504984  516753 main.go:141] libmachine: (ha-161305-m02) Ensuring network mk-ha-161305 is active
	I0730 00:37:23.505412  516753 main.go:141] libmachine: (ha-161305-m02) Getting domain xml...
	I0730 00:37:23.506140  516753 main.go:141] libmachine: (ha-161305-m02) Creating domain...
	I0730 00:37:24.738496  516753 main.go:141] libmachine: (ha-161305-m02) Waiting to get IP...
	I0730 00:37:24.739543  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:24.739982  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:24.740011  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:24.739950  517154 retry.go:31] will retry after 240.507777ms: waiting for machine to come up
	I0730 00:37:24.982455  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:24.982949  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:24.982984  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:24.982882  517154 retry.go:31] will retry after 343.734606ms: waiting for machine to come up
	I0730 00:37:25.328448  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:25.328889  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:25.328916  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:25.328857  517154 retry.go:31] will retry after 407.015391ms: waiting for machine to come up
	I0730 00:37:25.737479  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:25.737934  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:25.737985  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:25.737877  517154 retry.go:31] will retry after 553.281612ms: waiting for machine to come up
	I0730 00:37:26.292463  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:26.292914  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:26.292954  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:26.292862  517154 retry.go:31] will retry after 525.274717ms: waiting for machine to come up
	I0730 00:37:26.819274  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:26.819682  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:26.819706  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:26.819621  517154 retry.go:31] will retry after 719.917184ms: waiting for machine to come up
	I0730 00:37:27.541499  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:27.541949  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:27.541988  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:27.541887  517154 retry.go:31] will retry after 759.939347ms: waiting for machine to come up
	I0730 00:37:28.303096  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:28.303451  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:28.303483  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:28.303403  517154 retry.go:31] will retry after 988.04931ms: waiting for machine to come up
	I0730 00:37:29.292885  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:29.293365  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:29.293579  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:29.293295  517154 retry.go:31] will retry after 1.192367296s: waiting for machine to come up
	I0730 00:37:30.486839  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:30.487223  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:30.487280  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:30.487167  517154 retry.go:31] will retry after 1.500364555s: waiting for machine to come up
	I0730 00:37:31.990084  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:31.990732  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:31.990763  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:31.990679  517154 retry.go:31] will retry after 2.339994382s: waiting for machine to come up
	I0730 00:37:34.332879  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:34.333348  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:34.333375  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:34.333309  517154 retry.go:31] will retry after 2.725807557s: waiting for machine to come up
	I0730 00:37:37.061917  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:37.062512  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:37.062543  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:37.062478  517154 retry.go:31] will retry after 3.140725454s: waiting for machine to come up
	I0730 00:37:40.205929  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:40.206301  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find current IP address of domain ha-161305-m02 in network mk-ha-161305
	I0730 00:37:40.206632  516753 main.go:141] libmachine: (ha-161305-m02) DBG | I0730 00:37:40.206544  517154 retry.go:31] will retry after 4.983106137s: waiting for machine to come up
	I0730 00:37:45.191468  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.192006  516753 main.go:141] libmachine: (ha-161305-m02) Found IP for machine: 192.168.39.126
	I0730 00:37:45.192034  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has current primary IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.192048  516753 main.go:141] libmachine: (ha-161305-m02) Reserving static IP address...
	I0730 00:37:45.192619  516753 main.go:141] libmachine: (ha-161305-m02) DBG | unable to find host DHCP lease matching {name: "ha-161305-m02", mac: "52:54:00:44:e3:c9", ip: "192.168.39.126"} in network mk-ha-161305
	I0730 00:37:45.265169  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Getting to WaitForSSH function...
	I0730 00:37:45.265195  516753 main.go:141] libmachine: (ha-161305-m02) Reserved static IP address: 192.168.39.126
	I0730 00:37:45.265208  516753 main.go:141] libmachine: (ha-161305-m02) Waiting for SSH to be available...
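The run of "will retry after …" lines above is minikube's retry helper polling libvirt until the new guest shows up with a DHCP lease, sleeping a growing, jittered interval between attempts. A minimal sketch of that wait-with-backoff pattern (function names and backoff constants here are illustrative, not minikube's actual retry.go API):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check until it succeeds or the deadline passes, sleeping a
	// jittered, growing interval between attempts -- the same shape as the
	// "will retry after ..." lines in the log above.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for attempt := 1; ; attempt++ {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machine to come up")
			}
			// Jitter so concurrent waiters do not poll in lockstep.
			delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, delay)
			time.Sleep(delay)
			backoff = backoff * 3 / 2 // grow the base interval each round
		}
	}

	func main() {
		start := time.Now()
		// Stand-in for "does the domain have an IP yet?" -- succeeds after ~2s.
		err := waitFor(func() error {
			if time.Since(start) > 2*time.Second {
				return nil
			}
			return errors.New("unable to find current IP address")
		}, 30*time.Second)
		fmt.Println("done:", err)
	}
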
	I0730 00:37:45.267760  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.268211  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.268241  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.268480  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Using SSH client type: external
	I0730 00:37:45.268509  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa (-rw-------)
	I0730 00:37:45.268541  516753 main.go:141] libmachine: (ha-161305-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:37:45.268556  516753 main.go:141] libmachine: (ha-161305-m02) DBG | About to run SSH command:
	I0730 00:37:45.268570  516753 main.go:141] libmachine: (ha-161305-m02) DBG | exit 0
	I0730 00:37:45.396779  516753 main.go:141] libmachine: (ha-161305-m02) DBG | SSH cmd err, output: <nil>: 
	I0730 00:37:45.397063  516753 main.go:141] libmachine: (ha-161305-m02) KVM machine creation complete!
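"Waiting for SSH to be available" above is a loop that runs the trivial command exit 0 over SSH until it returns cleanly, driven through the external ssh binary with the options shown in the DBG lines. A hedged sketch of that probe (host, user, and key path are placeholders):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns nil once `ssh ... exit 0` succeeds, i.e. sshd is up and
	// key auth works. Host and key path are placeholders for this sketch.
	func sshReady(host, keyPath string) error {
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@" + host,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		for i := 0; i < 30; i++ {
			if err := sshReady("192.168.39.126", "/path/to/id_rsa"); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}
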
	I0730 00:37:45.397374  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetConfigRaw
	I0730 00:37:45.397994  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:45.398219  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:45.398429  516753 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 00:37:45.398459  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:37:45.399869  516753 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 00:37:45.399884  516753 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 00:37:45.399889  516753 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 00:37:45.399895  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.402275  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.402631  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.402650  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.402780  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.402950  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.403102  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.403242  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.403425  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:45.403683  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:45.403699  516753 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 00:37:45.511797  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:37:45.511819  516753 main.go:141] libmachine: Detecting the provisioner...
	I0730 00:37:45.511827  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.514704  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.515077  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.515112  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.515270  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.515455  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.515651  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.515787  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.515965  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:45.516162  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:45.516174  516753 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 00:37:45.625352  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 00:37:45.625457  516753 main.go:141] libmachine: found compatible host: buildroot
	I0730 00:37:45.625468  516753 main.go:141] libmachine: Provisioning with buildroot...
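Detecting the provisioner amounts to running cat /etc/os-release on the guest and matching the ID/NAME fields, which is how the log concludes "found compatible host: buildroot". A small parser sketch, assuming the os-release text has already been fetched over SSH:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns /etc/os-release KEY=VALUE lines into a map,
	// stripping surrounding quotes from the values.
	func parseOSRelease(contents string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(contents))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			k, v, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			out[k] = strings.Trim(v, `"`)
		}
		return out
	}

	func main() {
		osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		info := parseOSRelease(osRelease)
		if info["ID"] == "buildroot" {
			fmt.Println("found compatible host:", info["ID"]) // matches the log line above
		}
	}
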
	I0730 00:37:45.625479  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetMachineName
	I0730 00:37:45.625801  516753 buildroot.go:166] provisioning hostname "ha-161305-m02"
	I0730 00:37:45.625845  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetMachineName
	I0730 00:37:45.626078  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.628630  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.629030  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.629059  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.629188  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.629385  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.629597  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.629823  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.630025  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:45.630232  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:45.630246  516753 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305-m02 && echo "ha-161305-m02" | sudo tee /etc/hostname
	I0730 00:37:45.755899  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305-m02
	
	I0730 00:37:45.755928  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.758701  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.758989  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.759023  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.759147  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.759370  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.759539  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.759676  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.759855  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:45.760059  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:45.760077  516753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:37:45.880889  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
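Hostname provisioning is two remote commands: one that sets the hostname and writes /etc/hostname, and the /etc/hosts patch-up quoted just above, which only rewrites the 127.0.1.1 entry when the name is not already present. A sketch that assembles those command strings the same way (purely illustrative string-building, not minikube's buildroot provisioner code):

	package main

	import "fmt"

	// hostnameCommands returns the two remote commands a minikube-style
	// provisioner runs to pin a hostname: one to set it, one to make sure
	// /etc/hosts resolves it.
	func hostnameCommands(name string) []string {
		set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
		hosts := fmt.Sprintf(
			"if ! grep -xq '.*\\s%[1]s' /etc/hosts; then "+
				"if grep -xq '127.0.1.1\\s.*' /etc/hosts; then "+
				"sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts; "+
				"else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi", name)
		return []string{set, hosts}
	}

	func main() {
		for _, c := range hostnameCommands("ha-161305-m02") {
			fmt.Println(c)
		}
	}
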
	I0730 00:37:45.880927  516753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:37:45.880950  516753 buildroot.go:174] setting up certificates
	I0730 00:37:45.880961  516753 provision.go:84] configureAuth start
	I0730 00:37:45.880973  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetMachineName
	I0730 00:37:45.881272  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:37:45.883737  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.884115  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.884143  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.884270  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.886533  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.886893  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.886926  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.887058  516753 provision.go:143] copyHostCerts
	I0730 00:37:45.887095  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:37:45.887140  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:37:45.887152  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:37:45.887242  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:37:45.887340  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:37:45.887359  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:37:45.887366  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:37:45.887395  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:37:45.887441  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:37:45.887457  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:37:45.887463  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:37:45.887484  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:37:45.887542  516753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305-m02 san=[127.0.0.1 192.168.39.126 ha-161305-m02 localhost minikube]
	I0730 00:37:45.945115  516753 provision.go:177] copyRemoteCerts
	I0730 00:37:45.945183  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:37:45.945210  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:45.947826  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.948207  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:45.948245  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:45.948393  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:45.948578  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:45.948729  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:45.948853  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:37:46.034791  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:37:46.034862  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:37:46.060900  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:37:46.060990  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 00:37:46.086451  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:37:46.086529  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0730 00:37:46.111836  516753 provision.go:87] duration metric: took 230.859762ms to configureAuth
	I0730 00:37:46.111864  516753 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:37:46.112058  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:37:46.112154  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:46.115151  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.115532  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.115561  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.115780  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.116013  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.116276  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.116459  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.116668  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:46.116899  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:46.116916  516753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:37:46.384040  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:37:46.384073  516753 main.go:141] libmachine: Checking connection to Docker...
	I0730 00:37:46.384081  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetURL
	I0730 00:37:46.385507  516753 main.go:141] libmachine: (ha-161305-m02) DBG | Using libvirt version 6000000
	I0730 00:37:46.387687  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.388076  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.388101  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.388320  516753 main.go:141] libmachine: Docker is up and running!
	I0730 00:37:46.388337  516753 main.go:141] libmachine: Reticulating splines...
	I0730 00:37:46.388347  516753 client.go:171] duration metric: took 23.657500004s to LocalClient.Create
	I0730 00:37:46.388377  516753 start.go:167] duration metric: took 23.657600459s to libmachine.API.Create "ha-161305"
	I0730 00:37:46.388389  516753 start.go:293] postStartSetup for "ha-161305-m02" (driver="kvm2")
	I0730 00:37:46.388402  516753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:37:46.388424  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.388715  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:37:46.388741  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:46.391189  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.391580  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.391608  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.391782  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.391983  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.392173  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.392327  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:37:46.478242  516753 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:37:46.482085  516753 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:37:46.482110  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:37:46.482179  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:37:46.482248  516753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:37:46.482258  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:37:46.482336  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:37:46.490894  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:37:46.512068  516753 start.go:296] duration metric: took 123.663993ms for postStartSetup
	I0730 00:37:46.512118  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetConfigRaw
	I0730 00:37:46.512763  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:37:46.515301  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.515641  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.515673  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.515889  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:37:46.516123  516753 start.go:128] duration metric: took 23.806454125s to createHost
	I0730 00:37:46.516151  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:46.518357  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.518644  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.518673  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.518814  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.519004  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.519177  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.519314  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.519496  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:37:46.519659  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0730 00:37:46.519668  516753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0730 00:37:46.629163  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722299866.607970383
	
	I0730 00:37:46.629189  516753 fix.go:216] guest clock: 1722299866.607970383
	I0730 00:37:46.629197  516753 fix.go:229] Guest: 2024-07-30 00:37:46.607970383 +0000 UTC Remote: 2024-07-30 00:37:46.516138998 +0000 UTC m=+77.885961689 (delta=91.831385ms)
	I0730 00:37:46.629214  516753 fix.go:200] guest clock delta is within tolerance: 91.831385ms
	I0730 00:37:46.629219  516753 start.go:83] releasing machines lock for "ha-161305-m02", held for 23.919646347s
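The "guest clock" lines compare the VM's date +%s.%N output against the host's wall clock and only act when the drift exceeds a tolerance; here the delta is about 92ms, so nothing is done. A small sketch of that check, reusing the numbers from the log (the 2s tolerance is an assumption for illustration, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// it drifts from the given host time.
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(0, 1722299866516138998) // host wall clock, from the log
		delta, err := clockDelta("1722299866.607970383", host)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed tolerance for this sketch
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}
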
	I0730 00:37:46.629241  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.629569  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:37:46.632152  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.632483  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.632511  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.634971  516753 out.go:177] * Found network options:
	I0730 00:37:46.636255  516753 out.go:177]   - NO_PROXY=192.168.39.80
	W0730 00:37:46.637476  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 00:37:46.637506  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.638017  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.638219  516753 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:37:46.638307  516753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:37:46.638362  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	W0730 00:37:46.638436  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 00:37:46.638499  516753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:37:46.638515  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:37:46.640789  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.641141  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.641170  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.641189  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.641264  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.641462  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.641619  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.641632  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:46.641655  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:46.641740  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:37:46.641976  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:37:46.642134  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:37:46.642309  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:37:46.642479  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:37:46.883173  516753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:37:46.888907  516753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:37:46.888970  516753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:37:46.904225  516753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 00:37:46.904255  516753 start.go:495] detecting cgroup driver to use...
	I0730 00:37:46.904346  516753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:37:46.919641  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:37:46.932861  516753 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:37:46.932930  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:37:46.946141  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:37:46.959737  516753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:37:47.076469  516753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:37:47.241858  516753 docker.go:233] disabling docker service ...
	I0730 00:37:47.241925  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:37:47.258144  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:37:47.271355  516753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:37:47.396700  516753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:37:47.511681  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:37:47.525833  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:37:47.542979  516753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:37:47.543058  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.553712  516753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:37:47.553784  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.563932  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.573482  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.583372  516753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:37:47.593240  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.602697  516753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.618421  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:37:47.628078  516753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:37:47.637033  516753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 00:37:47.637090  516753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 00:37:47.649603  516753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:37:47.659006  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:37:47.776747  516753 ssh_runner.go:195] Run: sudo systemctl restart crio
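The block above shows the CRI-O preparation pattern: stop and mask the competing runtimes, rewrite /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroupfs, sysctls), and make sure bridge netfilter is available, falling back to modprobe br_netfilter when the sysctl is missing, before restarting crio. A hedged local sketch of just that verify-else-modprobe fallback (the real code drives these commands over SSH on the guest; this needs root to actually succeed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBrNetfilter mirrors the fallback in the log: try to read the
	// bridge-nf-call-iptables sysctl, and if that fails (module not loaded),
	// load br_netfilter and check again.
	func ensureBrNetfilter() error {
		check := func() error {
			return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
		}
		if err := check(); err != nil {
			fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return err
			}
		}
		return check()
	}

	func main() {
		if err := ensureBrNetfilter(); err != nil {
			fmt.Println("bridge netfilter still unavailable:", err)
			return
		}
		fmt.Println("net.bridge.bridge-nf-call-iptables is readable")
	}
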
	I0730 00:37:47.910467  516753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:37:47.910554  516753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:37:47.915148  516753 start.go:563] Will wait 60s for crictl version
	I0730 00:37:47.915220  516753 ssh_runner.go:195] Run: which crictl
	I0730 00:37:47.918871  516753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:37:47.955620  516753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:37:47.955720  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:37:47.982020  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:37:48.010734  516753 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:37:48.012141  516753 out.go:177]   - env NO_PROXY=192.168.39.80
	I0730 00:37:48.013340  516753 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:37:48.016450  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:48.016854  516753 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:37:36 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:37:48.016879  516753 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:37:48.017165  516753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:37:48.020973  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:37:48.033388  516753 mustload.go:65] Loading cluster: ha-161305
	I0730 00:37:48.033619  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:37:48.033881  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:48.033921  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:48.049782  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
	I0730 00:37:48.050263  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:48.050728  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:48.050754  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:48.051129  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:48.051377  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:37:48.052993  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:37:48.053326  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:37:48.053368  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:37:48.068216  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35297
	I0730 00:37:48.068647  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:37:48.069196  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:37:48.069221  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:37:48.069539  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:37:48.069759  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:37:48.069905  516753 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.126
	I0730 00:37:48.069918  516753 certs.go:194] generating shared ca certs ...
	I0730 00:37:48.069938  516753 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:48.070105  516753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:37:48.070152  516753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:37:48.070167  516753 certs.go:256] generating profile certs ...
	I0730 00:37:48.070270  516753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:37:48.070304  516753 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.4fb5d8e8
	I0730 00:37:48.070326  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.4fb5d8e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.126 192.168.39.254]
	I0730 00:37:48.264363  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.4fb5d8e8 ...
	I0730 00:37:48.264393  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.4fb5d8e8: {Name:mk33991990a82d48e58b66a07fc4d399aa40ab4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:48.264605  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.4fb5d8e8 ...
	I0730 00:37:48.264627  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.4fb5d8e8: {Name:mk2fbb9322662bb735800bbd51301531f9faa956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:37:48.264752  516753 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.4fb5d8e8 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:37:48.264928  516753 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.4fb5d8e8 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
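The apiserver profile cert is regenerated at this point because its SAN list now has to cover the second control plane's IP (192.168.39.126) alongside the service IP, localhost, the first node, and the VIP, which is exactly the list printed above. A self-contained sketch of issuing a server certificate with IP SANs from a throwaway CA; this is generic crypto/x509 usage, not minikube's certs.go:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA key and self-signed CA certificate (key-gen errors
		// elided for brevity in this sketch).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert whose SANs cover every address the apiserver answers on,
		// mirroring the SAN list printed in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-161305-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("192.168.39.80"), net.ParseIP("192.168.39.126"),
				net.ParseIP("192.168.39.254"),
			},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		cert, _ := x509.ParseCertificate(srvDER)
		fmt.Println("issued cert with IP SANs:", cert.IPAddresses)
	}
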
	I0730 00:37:48.265125  516753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:37:48.265144  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:37:48.265163  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:37:48.265185  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:37:48.265202  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:37:48.265220  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:37:48.265236  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:37:48.265255  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:37:48.265277  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:37:48.265342  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:37:48.265388  516753 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:37:48.265404  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:37:48.265439  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:37:48.265470  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:37:48.265502  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:37:48.265556  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:37:48.265591  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:37:48.265610  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:37:48.265631  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:37:48.265676  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:37:48.268648  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:48.269155  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:37:48.269189  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:37:48.269375  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:37:48.269577  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:37:48.269733  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:37:48.269858  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:37:48.345124  516753 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0730 00:37:48.349937  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0730 00:37:48.364526  516753 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0730 00:37:48.371053  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0730 00:37:48.381684  516753 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0730 00:37:48.385967  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0730 00:37:48.396039  516753 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0730 00:37:48.399905  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0730 00:37:48.415622  516753 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0730 00:37:48.419701  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0730 00:37:48.429555  516753 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0730 00:37:48.433651  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0730 00:37:48.442848  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:37:48.466298  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:37:48.488792  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:37:48.510960  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:37:48.532855  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0730 00:37:48.555244  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 00:37:48.577909  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:37:48.599778  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:37:48.621320  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:37:48.644196  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:37:48.665794  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:37:48.687936  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0730 00:37:48.703150  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0730 00:37:48.718210  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0730 00:37:48.733165  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0730 00:37:48.748526  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0730 00:37:48.763541  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0730 00:37:48.778980  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0730 00:37:48.794361  516753 ssh_runner.go:195] Run: openssl version
	I0730 00:37:48.800358  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:37:48.810540  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:37:48.814929  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:37:48.815002  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:37:48.820814  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:37:48.831108  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:37:48.841250  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:37:48.845266  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:37:48.845330  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:37:48.850667  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:37:48.860636  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:37:48.871385  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:37:48.875694  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:37:48.875774  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:37:48.881549  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 00:37:48.891947  516753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:37:48.896014  516753 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 00:37:48.896067  516753 kubeadm.go:934] updating node {m02 192.168.39.126 8443 v1.30.3 crio true true} ...
	I0730 00:37:48.896180  516753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:37:48.896213  516753 kube-vip.go:115] generating kube-vip config ...
	I0730 00:37:48.896248  516753 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:37:48.914384  516753 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:37:48.914459  516753 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0730 00:37:48.914512  516753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:37:48.924356  516753 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0730 00:37:48.924415  516753 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0730 00:37:48.933719  516753 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0730 00:37:48.933750  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0730 00:37:48.933799  516753 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0730 00:37:48.933830  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0730 00:37:48.933829  516753 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0730 00:37:48.938214  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0730 00:37:48.938241  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0730 00:37:50.201661  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0730 00:37:50.201754  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0730 00:37:50.206398  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0730 00:37:50.206432  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0730 00:38:00.365092  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:38:00.379359  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0730 00:38:00.379482  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0730 00:38:00.383648  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0730 00:38:00.383682  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0730 00:38:00.760191  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0730 00:38:00.769469  516753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0730 00:38:00.784857  516753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:38:00.800208  516753 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0730 00:38:00.815904  516753 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:38:00.819814  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:38:00.831159  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:38:00.936912  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:38:00.953783  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:38:00.954274  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:38:00.954344  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:38:00.970596  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39843
	I0730 00:38:00.971114  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:38:00.971590  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:38:00.971615  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:38:00.971950  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:38:00.972146  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:38:00.972333  516753 start.go:317] joinCluster: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:38:00.972476  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0730 00:38:00.972496  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:38:00.975638  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:38:00.976172  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:38:00.976205  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:38:00.976381  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:38:00.976565  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:38:00.976728  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:38:00.976868  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:38:01.132634  516753 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:38:01.132688  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rrahj6.2pnfdyo0jftsl9jl --discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-161305-m02 --control-plane --apiserver-advertise-address=192.168.39.126 --apiserver-bind-port=8443"
	I0730 00:38:22.418746  516753 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rrahj6.2pnfdyo0jftsl9jl --discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-161305-m02 --control-plane --apiserver-advertise-address=192.168.39.126 --apiserver-bind-port=8443": (21.286013651s)
	I0730 00:38:22.418787  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0730 00:38:22.917683  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-161305-m02 minikube.k8s.io/updated_at=2024_07_30T00_38_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=ha-161305 minikube.k8s.io/primary=false
	I0730 00:38:23.062014  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-161305-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0730 00:38:23.209584  516753 start.go:319] duration metric: took 22.237244485s to joinCluster
	I0730 00:38:23.209680  516753 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:38:23.210031  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:38:23.211642  516753 out.go:177] * Verifying Kubernetes components...
	I0730 00:38:23.213608  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:38:23.456752  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:38:23.500437  516753 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:38:23.500816  516753 kapi.go:59] client config for ha-161305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key", CAFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0730 00:38:23.500908  516753 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.80:8443
	I0730 00:38:23.501191  516753 node_ready.go:35] waiting up to 6m0s for node "ha-161305-m02" to be "Ready" ...
	I0730 00:38:23.501312  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:23.501323  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:23.501334  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:23.501339  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:23.512730  516753 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0730 00:38:24.002037  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:24.002068  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:24.002079  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:24.002085  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:24.006031  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:24.501999  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:24.502031  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:24.502044  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:24.502054  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:24.505529  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:25.001766  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:25.001800  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:25.001809  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:25.001823  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:25.004247  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:25.501994  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:25.502032  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:25.502040  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:25.502045  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:25.504991  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:25.505508  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:26.002000  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:26.002026  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:26.002037  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:26.002042  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:26.005495  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:26.501953  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:26.501977  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:26.501989  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:26.501997  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:26.504628  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:27.002204  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:27.002229  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:27.002238  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:27.002242  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:27.005498  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:27.502259  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:27.502282  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:27.502294  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:27.502307  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:27.506707  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:38:27.507272  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:28.001741  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:28.001770  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:28.001781  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:28.001786  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:28.004728  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:28.501501  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:28.501528  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:28.501541  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:28.501547  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:28.509465  516753 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0730 00:38:29.001523  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:29.001549  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:29.001559  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:29.001564  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:29.004868  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:29.501939  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:29.501962  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:29.501973  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:29.501978  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:29.505207  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:30.001980  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:30.002005  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:30.002016  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:30.002022  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:30.011446  516753 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0730 00:38:30.012222  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:30.501482  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:30.501505  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:30.501513  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:30.501518  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:30.505183  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:31.001829  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:31.001854  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:31.001863  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:31.001867  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:31.005255  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:31.502256  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:31.502290  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:31.502298  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:31.502302  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:31.505718  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:32.001855  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:32.001882  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:32.001890  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:32.001893  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:32.004578  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:32.501604  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:32.501628  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:32.501636  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:32.501640  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:32.506132  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:38:32.507080  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:33.001978  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:33.002005  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:33.002017  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:33.002025  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:33.004870  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:33.501714  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:33.501740  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:33.501751  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:33.501758  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:33.505143  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:34.001577  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:34.001600  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:34.001608  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:34.001612  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:34.004836  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:34.501626  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:34.501649  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:34.501658  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:34.501662  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:34.504935  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:35.001777  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:35.001802  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:35.001810  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:35.001815  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:35.005102  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:35.005677  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:35.502200  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:35.502229  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:35.502237  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:35.502242  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:35.505721  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:36.001935  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:36.001958  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:36.001967  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:36.001973  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:36.004951  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:36.501883  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:36.501909  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:36.501919  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:36.501923  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:36.504933  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:37.001968  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:37.001991  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:37.002000  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:37.002005  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:37.005496  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:37.006104  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:37.501504  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:37.501532  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:37.501544  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:37.501552  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:37.509457  516753 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0730 00:38:38.001841  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:38.001871  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:38.001883  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:38.001890  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:38.004961  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:38.502203  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:38.502229  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:38.502241  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:38.502245  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:38.505616  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:39.001465  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:39.001492  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:39.001504  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:39.001509  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:39.004565  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:39.501984  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:39.502008  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:39.502017  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:39.502022  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:39.506144  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:38:39.506677  516753 node_ready.go:53] node "ha-161305-m02" has status "Ready":"False"
	I0730 00:38:40.001989  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:40.002014  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:40.002023  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:40.002028  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:40.005478  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:40.501415  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:40.501443  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:40.501451  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:40.501456  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:40.504719  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.001420  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:41.001446  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.001454  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.001457  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.004900  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.005728  516753 node_ready.go:49] node "ha-161305-m02" has status "Ready":"True"
	I0730 00:38:41.005750  516753 node_ready.go:38] duration metric: took 17.504538043s for node "ha-161305-m02" to be "Ready" ...
	I0730 00:38:41.005761  516753 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:38:41.005842  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:41.005851  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.005859  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.005864  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.011197  516753 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 00:38:41.017726  516753 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.017834  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bdpds
	I0730 00:38:41.017846  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.017857  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.017866  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.020518  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.021113  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.021133  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.021144  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.021152  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.023445  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.023923  516753 pod_ready.go:92] pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.023943  516753 pod_ready.go:81] duration metric: took 6.186327ms for pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.023954  516753 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.024027  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mzcln
	I0730 00:38:41.024037  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.024056  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.024062  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.026332  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.026862  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.026877  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.026884  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.026888  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.029264  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.029611  516753 pod_ready.go:92] pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.029628  516753 pod_ready.go:81] duration metric: took 5.666334ms for pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.029636  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.029682  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305
	I0730 00:38:41.029689  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.029695  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.029700  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.031918  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.032497  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.032511  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.032516  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.032520  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.034666  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.035250  516753 pod_ready.go:92] pod "etcd-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.035266  516753 pod_ready.go:81] duration metric: took 5.624064ms for pod "etcd-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.035273  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.035321  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305-m02
	I0730 00:38:41.035336  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.035343  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.035351  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.037615  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:41.038037  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:41.038050  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.038057  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.038061  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.040015  516753 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0730 00:38:41.040500  516753 pod_ready.go:92] pod "etcd-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.040515  516753 pod_ready.go:81] duration metric: took 5.236235ms for pod "etcd-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.040531  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.201917  516753 request.go:629] Waited for 161.295825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305
	I0730 00:38:41.201992  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305
	I0730 00:38:41.202000  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.202012  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.202021  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.205243  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.402216  516753 request.go:629] Waited for 196.372053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.402316  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:41.402333  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.402346  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.402357  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.405528  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.406031  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.406052  516753 pod_ready.go:81] duration metric: took 365.510849ms for pod "kube-apiserver-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.406062  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.602228  516753 request.go:629] Waited for 196.071289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m02
	I0730 00:38:41.602302  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m02
	I0730 00:38:41.602307  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.602315  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.602318  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.606019  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:41.802184  516753 request.go:629] Waited for 195.17089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:41.802258  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:41.802263  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:41.802272  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:41.802277  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:41.806358  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:38:41.807015  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:41.807034  516753 pod_ready.go:81] duration metric: took 400.962679ms for pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:41.807044  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.002140  516753 request.go:629] Waited for 195.026927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305
	I0730 00:38:42.002207  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305
	I0730 00:38:42.002212  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.002220  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.002224  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.005889  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:42.201895  516753 request.go:629] Waited for 195.278311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:42.201969  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:42.201976  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.201987  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.201997  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.205486  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:42.205952  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:42.205978  516753 pod_ready.go:81] duration metric: took 398.925824ms for pod "kube-controller-manager-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.205993  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.402065  516753 request.go:629] Waited for 195.954248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m02
	I0730 00:38:42.402136  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m02
	I0730 00:38:42.402142  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.402149  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.402153  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.405638  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:42.601809  516753 request.go:629] Waited for 195.422281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:42.601914  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:42.601927  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.601938  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.601948  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.605364  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:42.606017  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:42.606041  516753 pod_ready.go:81] duration metric: took 400.038029ms for pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.606056  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqr2f" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:42.801620  516753 request.go:629] Waited for 195.4652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqr2f
	I0730 00:38:42.801683  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqr2f
	I0730 00:38:42.801688  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:42.801695  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:42.801702  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:42.805510  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.001427  516753 request.go:629] Waited for 195.290569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:43.001505  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:43.001513  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.001521  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.001544  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.004506  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:43.004992  516753 pod_ready.go:92] pod "kube-proxy-pqr2f" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:43.005016  516753 pod_ready.go:81] duration metric: took 398.948113ms for pod "kube-proxy-pqr2f" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.005032  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wptvn" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.202074  516753 request.go:629] Waited for 196.947057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wptvn
	I0730 00:38:43.202148  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wptvn
	I0730 00:38:43.202158  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.202170  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.202178  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.205936  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.402047  516753 request.go:629] Waited for 195.413267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:43.402121  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:43.402128  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.402139  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.402149  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.405264  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.405841  516753 pod_ready.go:92] pod "kube-proxy-wptvn" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:43.405862  516753 pod_ready.go:81] duration metric: took 400.816309ms for pod "kube-proxy-wptvn" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.405872  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.602026  516753 request.go:629] Waited for 196.080796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305
	I0730 00:38:43.602120  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305
	I0730 00:38:43.602130  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.602144  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.602153  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.605247  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.801655  516753 request.go:629] Waited for 195.834831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:43.801738  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:38:43.801750  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:43.801762  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:43.801773  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:43.805279  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:43.805732  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:43.805750  516753 pod_ready.go:81] duration metric: took 399.871741ms for pod "kube-scheduler-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:43.805760  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:44.001902  516753 request.go:629] Waited for 196.042949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m02
	I0730 00:38:44.002008  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m02
	I0730 00:38:44.002017  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.002027  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.002032  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.005331  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:44.202261  516753 request.go:629] Waited for 196.386792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:44.202331  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:38:44.202337  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.202344  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.202349  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.204873  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:38:44.205415  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:38:44.205444  516753 pod_ready.go:81] duration metric: took 399.675361ms for pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:38:44.205456  516753 pod_ready.go:38] duration metric: took 3.199683199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:38:44.205471  516753 api_server.go:52] waiting for apiserver process to appear ...
	I0730 00:38:44.205531  516753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:38:44.222886  516753 api_server.go:72] duration metric: took 21.013159331s to wait for apiserver process to appear ...
	I0730 00:38:44.222912  516753 api_server.go:88] waiting for apiserver healthz status ...
	I0730 00:38:44.222932  516753 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0730 00:38:44.227033  516753 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0730 00:38:44.227134  516753 round_trippers.go:463] GET https://192.168.39.80:8443/version
	I0730 00:38:44.227147  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.227158  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.227167  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.227905  516753 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0730 00:38:44.228004  516753 api_server.go:141] control plane version: v1.30.3
	I0730 00:38:44.228021  516753 api_server.go:131] duration metric: took 5.102431ms to wait for apiserver health ...
	I0730 00:38:44.228029  516753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 00:38:44.402481  516753 request.go:629] Waited for 174.34802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:44.402543  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:44.402549  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.402566  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.402574  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.410169  516753 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0730 00:38:44.415952  516753 system_pods.go:59] 17 kube-system pods found
	I0730 00:38:44.416000  516753 system_pods.go:61] "coredns-7db6d8ff4d-bdpds" [7c1470c5-85f4-4dfa-84c0-14aa6c713e73] Running
	I0730 00:38:44.416008  516753 system_pods.go:61] "coredns-7db6d8ff4d-mzcln" [cab12f67-38e0-41f7-8414-120064dca1e6] Running
	I0730 00:38:44.416012  516753 system_pods.go:61] "etcd-ha-161305" [5c7dae60-3334-4bbb-90d0-96902a0e19ca] Running
	I0730 00:38:44.416016  516753 system_pods.go:61] "etcd-ha-161305-m02" [18952930-32a5-4b81-a67c-6324aee65eb8] Running
	I0730 00:38:44.416020  516753 system_pods.go:61] "kindnet-dj7v2" [8d584855-119a-4df9-87d4-4c4fd59ec386] Running
	I0730 00:38:44.416024  516753 system_pods.go:61] "kindnet-zrzxf" [3745faa8-044d-4923-8a49-c21a0332e208] Running
	I0730 00:38:44.416029  516753 system_pods.go:61] "kube-apiserver-ha-161305" [55b68f3e-7127-4a03-83d7-ea169937b7b7] Running
	I0730 00:38:44.416044  516753 system_pods.go:61] "kube-apiserver-ha-161305-m02" [834df1fc-4400-475f-b86e-7176f335f79b] Running
	I0730 00:38:44.416050  516753 system_pods.go:61] "kube-controller-manager-ha-161305" [647f1107-c722-4d08-a32b-d53a24f212c9] Running
	I0730 00:38:44.416060  516753 system_pods.go:61] "kube-controller-manager-ha-161305-m02" [2b16c61d-99fe-4807-b362-2361e6d9ec03] Running
	I0730 00:38:44.416065  516753 system_pods.go:61] "kube-proxy-pqr2f" [88c5dd9f-639f-4085-8a0f-064b53e870ea] Running
	I0730 00:38:44.416067  516753 system_pods.go:61] "kube-proxy-wptvn" [1733d06b-6eb7-4dd5-9349-b727cc05e797] Running
	I0730 00:38:44.416071  516753 system_pods.go:61] "kube-scheduler-ha-161305" [c9ce0f0c-40b3-44ea-8c7d-f8b1d7af9e16] Running
	I0730 00:38:44.416075  516753 system_pods.go:61] "kube-scheduler-ha-161305-m02" [98fa3e7a-7ed2-44b7-a1be-7121ca4899b0] Running
	I0730 00:38:44.416080  516753 system_pods.go:61] "kube-vip-ha-161305" [084d986e-4abd-4c66-aea9-5738f6a60ac5] Running
	I0730 00:38:44.416083  516753 system_pods.go:61] "kube-vip-ha-161305-m02" [6282069b-1ac8-44eb-910f-d658a28ae0f1] Running
	I0730 00:38:44.416089  516753 system_pods.go:61] "storage-provisioner" [75260b22-5ffc-4848-8c70-5b9cb3f010bf] Running
	I0730 00:38:44.416096  516753 system_pods.go:74] duration metric: took 188.053859ms to wait for pod list to return data ...
	I0730 00:38:44.416107  516753 default_sa.go:34] waiting for default service account to be created ...
	I0730 00:38:44.601552  516753 request.go:629] Waited for 185.33914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/default/serviceaccounts
	I0730 00:38:44.601625  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/default/serviceaccounts
	I0730 00:38:44.601631  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.601639  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.601647  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.604843  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:44.605108  516753 default_sa.go:45] found service account: "default"
	I0730 00:38:44.605129  516753 default_sa.go:55] duration metric: took 189.010974ms for default service account to be created ...
	I0730 00:38:44.605139  516753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 00:38:44.801526  516753 request.go:629] Waited for 196.303267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:44.801618  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:38:44.801624  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:44.801631  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:44.801636  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:44.806715  516753 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 00:38:44.810529  516753 system_pods.go:86] 17 kube-system pods found
	I0730 00:38:44.810555  516753 system_pods.go:89] "coredns-7db6d8ff4d-bdpds" [7c1470c5-85f4-4dfa-84c0-14aa6c713e73] Running
	I0730 00:38:44.810561  516753 system_pods.go:89] "coredns-7db6d8ff4d-mzcln" [cab12f67-38e0-41f7-8414-120064dca1e6] Running
	I0730 00:38:44.810565  516753 system_pods.go:89] "etcd-ha-161305" [5c7dae60-3334-4bbb-90d0-96902a0e19ca] Running
	I0730 00:38:44.810570  516753 system_pods.go:89] "etcd-ha-161305-m02" [18952930-32a5-4b81-a67c-6324aee65eb8] Running
	I0730 00:38:44.810574  516753 system_pods.go:89] "kindnet-dj7v2" [8d584855-119a-4df9-87d4-4c4fd59ec386] Running
	I0730 00:38:44.810578  516753 system_pods.go:89] "kindnet-zrzxf" [3745faa8-044d-4923-8a49-c21a0332e208] Running
	I0730 00:38:44.810585  516753 system_pods.go:89] "kube-apiserver-ha-161305" [55b68f3e-7127-4a03-83d7-ea169937b7b7] Running
	I0730 00:38:44.810589  516753 system_pods.go:89] "kube-apiserver-ha-161305-m02" [834df1fc-4400-475f-b86e-7176f335f79b] Running
	I0730 00:38:44.810596  516753 system_pods.go:89] "kube-controller-manager-ha-161305" [647f1107-c722-4d08-a32b-d53a24f212c9] Running
	I0730 00:38:44.810600  516753 system_pods.go:89] "kube-controller-manager-ha-161305-m02" [2b16c61d-99fe-4807-b362-2361e6d9ec03] Running
	I0730 00:38:44.810607  516753 system_pods.go:89] "kube-proxy-pqr2f" [88c5dd9f-639f-4085-8a0f-064b53e870ea] Running
	I0730 00:38:44.810610  516753 system_pods.go:89] "kube-proxy-wptvn" [1733d06b-6eb7-4dd5-9349-b727cc05e797] Running
	I0730 00:38:44.810614  516753 system_pods.go:89] "kube-scheduler-ha-161305" [c9ce0f0c-40b3-44ea-8c7d-f8b1d7af9e16] Running
	I0730 00:38:44.810619  516753 system_pods.go:89] "kube-scheduler-ha-161305-m02" [98fa3e7a-7ed2-44b7-a1be-7121ca4899b0] Running
	I0730 00:38:44.810623  516753 system_pods.go:89] "kube-vip-ha-161305" [084d986e-4abd-4c66-aea9-5738f6a60ac5] Running
	I0730 00:38:44.810627  516753 system_pods.go:89] "kube-vip-ha-161305-m02" [6282069b-1ac8-44eb-910f-d658a28ae0f1] Running
	I0730 00:38:44.810630  516753 system_pods.go:89] "storage-provisioner" [75260b22-5ffc-4848-8c70-5b9cb3f010bf] Running
	I0730 00:38:44.810637  516753 system_pods.go:126] duration metric: took 205.489759ms to wait for k8s-apps to be running ...
	I0730 00:38:44.810660  516753 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 00:38:44.810712  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:38:44.823949  516753 system_svc.go:56] duration metric: took 13.278644ms WaitForService to wait for kubelet
	I0730 00:38:44.823982  516753 kubeadm.go:582] duration metric: took 21.614261776s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:38:44.824007  516753 node_conditions.go:102] verifying NodePressure condition ...
	I0730 00:38:45.002457  516753 request.go:629] Waited for 178.352962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes
	I0730 00:38:45.002519  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes
	I0730 00:38:45.002524  516753 round_trippers.go:469] Request Headers:
	I0730 00:38:45.002532  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:38:45.002540  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:38:45.006051  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:38:45.006821  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:38:45.006845  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:38:45.006857  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:38:45.006861  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:38:45.006867  516753 node_conditions.go:105] duration metric: took 182.855378ms to run NodePressure ...
	I0730 00:38:45.006882  516753 start.go:241] waiting for startup goroutines ...
	I0730 00:38:45.006908  516753 start.go:255] writing updated cluster config ...
	I0730 00:38:45.009162  516753 out.go:177] 
	I0730 00:38:45.010675  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:38:45.010761  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:38:45.012437  516753 out.go:177] * Starting "ha-161305-m03" control-plane node in "ha-161305" cluster
	I0730 00:38:45.013676  516753 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:38:45.013705  516753 cache.go:56] Caching tarball of preloaded images
	I0730 00:38:45.013831  516753 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:38:45.013845  516753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:38:45.013955  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:38:45.014155  516753 start.go:360] acquireMachinesLock for ha-161305-m03: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:38:45.014211  516753 start.go:364] duration metric: took 33.65µs to acquireMachinesLock for "ha-161305-m03"
	I0730 00:38:45.014237  516753 start.go:93] Provisioning new machine with config: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:38:45.014356  516753 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0730 00:38:45.015921  516753 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0730 00:38:45.016012  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:38:45.016057  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:38:45.031210  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41399
	I0730 00:38:45.031641  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:38:45.032115  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:38:45.032137  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:38:45.032535  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:38:45.032769  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetMachineName
	I0730 00:38:45.033003  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:38:45.033265  516753 start.go:159] libmachine.API.Create for "ha-161305" (driver="kvm2")
	I0730 00:38:45.033307  516753 client.go:168] LocalClient.Create starting
	I0730 00:38:45.033349  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 00:38:45.033389  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:38:45.033405  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:38:45.033462  516753 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 00:38:45.033480  516753 main.go:141] libmachine: Decoding PEM data...
	I0730 00:38:45.033491  516753 main.go:141] libmachine: Parsing certificate...
	I0730 00:38:45.033507  516753 main.go:141] libmachine: Running pre-create checks...
	I0730 00:38:45.033515  516753 main.go:141] libmachine: (ha-161305-m03) Calling .PreCreateCheck
	I0730 00:38:45.033717  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetConfigRaw
	I0730 00:38:45.034134  516753 main.go:141] libmachine: Creating machine...
	I0730 00:38:45.034146  516753 main.go:141] libmachine: (ha-161305-m03) Calling .Create
	I0730 00:38:45.034286  516753 main.go:141] libmachine: (ha-161305-m03) Creating KVM machine...
	I0730 00:38:45.035837  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found existing default KVM network
	I0730 00:38:45.036001  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found existing private KVM network mk-ha-161305
	I0730 00:38:45.036142  516753 main.go:141] libmachine: (ha-161305-m03) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03 ...
	I0730 00:38:45.036167  516753 main.go:141] libmachine: (ha-161305-m03) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 00:38:45.036211  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:45.036113  517582 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:38:45.036301  516753 main.go:141] libmachine: (ha-161305-m03) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 00:38:45.304450  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:45.304320  517582 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa...
	I0730 00:38:45.384479  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:45.384323  517582 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/ha-161305-m03.rawdisk...
	I0730 00:38:45.384520  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Writing magic tar header
	I0730 00:38:45.384540  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Writing SSH key tar header
	I0730 00:38:45.384552  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:45.384447  517582 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03 ...
	I0730 00:38:45.384568  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03
	I0730 00:38:45.384646  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 00:38:45.384673  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03 (perms=drwx------)
	I0730 00:38:45.384682  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:38:45.384730  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 00:38:45.384758  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 00:38:45.384769  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 00:38:45.384781  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 00:38:45.384790  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 00:38:45.384819  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 00:38:45.384845  516753 main.go:141] libmachine: (ha-161305-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 00:38:45.384857  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home/jenkins
	I0730 00:38:45.384871  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Checking permissions on dir: /home
	I0730 00:38:45.384883  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Skipping /home - not owner
	I0730 00:38:45.384901  516753 main.go:141] libmachine: (ha-161305-m03) Creating domain...
	I0730 00:38:45.385805  516753 main.go:141] libmachine: (ha-161305-m03) define libvirt domain using xml: 
	I0730 00:38:45.385825  516753 main.go:141] libmachine: (ha-161305-m03) <domain type='kvm'>
	I0730 00:38:45.385833  516753 main.go:141] libmachine: (ha-161305-m03)   <name>ha-161305-m03</name>
	I0730 00:38:45.385841  516753 main.go:141] libmachine: (ha-161305-m03)   <memory unit='MiB'>2200</memory>
	I0730 00:38:45.385846  516753 main.go:141] libmachine: (ha-161305-m03)   <vcpu>2</vcpu>
	I0730 00:38:45.385854  516753 main.go:141] libmachine: (ha-161305-m03)   <features>
	I0730 00:38:45.385870  516753 main.go:141] libmachine: (ha-161305-m03)     <acpi/>
	I0730 00:38:45.385880  516753 main.go:141] libmachine: (ha-161305-m03)     <apic/>
	I0730 00:38:45.385888  516753 main.go:141] libmachine: (ha-161305-m03)     <pae/>
	I0730 00:38:45.385895  516753 main.go:141] libmachine: (ha-161305-m03)     
	I0730 00:38:45.385907  516753 main.go:141] libmachine: (ha-161305-m03)   </features>
	I0730 00:38:45.385916  516753 main.go:141] libmachine: (ha-161305-m03)   <cpu mode='host-passthrough'>
	I0730 00:38:45.385921  516753 main.go:141] libmachine: (ha-161305-m03)   
	I0730 00:38:45.385927  516753 main.go:141] libmachine: (ha-161305-m03)   </cpu>
	I0730 00:38:45.385950  516753 main.go:141] libmachine: (ha-161305-m03)   <os>
	I0730 00:38:45.385974  516753 main.go:141] libmachine: (ha-161305-m03)     <type>hvm</type>
	I0730 00:38:45.385988  516753 main.go:141] libmachine: (ha-161305-m03)     <boot dev='cdrom'/>
	I0730 00:38:45.385999  516753 main.go:141] libmachine: (ha-161305-m03)     <boot dev='hd'/>
	I0730 00:38:45.386010  516753 main.go:141] libmachine: (ha-161305-m03)     <bootmenu enable='no'/>
	I0730 00:38:45.386020  516753 main.go:141] libmachine: (ha-161305-m03)   </os>
	I0730 00:38:45.386030  516753 main.go:141] libmachine: (ha-161305-m03)   <devices>
	I0730 00:38:45.386038  516753 main.go:141] libmachine: (ha-161305-m03)     <disk type='file' device='cdrom'>
	I0730 00:38:45.386072  516753 main.go:141] libmachine: (ha-161305-m03)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/boot2docker.iso'/>
	I0730 00:38:45.386097  516753 main.go:141] libmachine: (ha-161305-m03)       <target dev='hdc' bus='scsi'/>
	I0730 00:38:45.386108  516753 main.go:141] libmachine: (ha-161305-m03)       <readonly/>
	I0730 00:38:45.386119  516753 main.go:141] libmachine: (ha-161305-m03)     </disk>
	I0730 00:38:45.386132  516753 main.go:141] libmachine: (ha-161305-m03)     <disk type='file' device='disk'>
	I0730 00:38:45.386149  516753 main.go:141] libmachine: (ha-161305-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 00:38:45.386166  516753 main.go:141] libmachine: (ha-161305-m03)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/ha-161305-m03.rawdisk'/>
	I0730 00:38:45.386180  516753 main.go:141] libmachine: (ha-161305-m03)       <target dev='hda' bus='virtio'/>
	I0730 00:38:45.386191  516753 main.go:141] libmachine: (ha-161305-m03)     </disk>
	I0730 00:38:45.386201  516753 main.go:141] libmachine: (ha-161305-m03)     <interface type='network'>
	I0730 00:38:45.386211  516753 main.go:141] libmachine: (ha-161305-m03)       <source network='mk-ha-161305'/>
	I0730 00:38:45.386226  516753 main.go:141] libmachine: (ha-161305-m03)       <model type='virtio'/>
	I0730 00:38:45.386236  516753 main.go:141] libmachine: (ha-161305-m03)     </interface>
	I0730 00:38:45.386247  516753 main.go:141] libmachine: (ha-161305-m03)     <interface type='network'>
	I0730 00:38:45.386262  516753 main.go:141] libmachine: (ha-161305-m03)       <source network='default'/>
	I0730 00:38:45.386273  516753 main.go:141] libmachine: (ha-161305-m03)       <model type='virtio'/>
	I0730 00:38:45.386290  516753 main.go:141] libmachine: (ha-161305-m03)     </interface>
	I0730 00:38:45.386308  516753 main.go:141] libmachine: (ha-161305-m03)     <serial type='pty'>
	I0730 00:38:45.386322  516753 main.go:141] libmachine: (ha-161305-m03)       <target port='0'/>
	I0730 00:38:45.386332  516753 main.go:141] libmachine: (ha-161305-m03)     </serial>
	I0730 00:38:45.386344  516753 main.go:141] libmachine: (ha-161305-m03)     <console type='pty'>
	I0730 00:38:45.386355  516753 main.go:141] libmachine: (ha-161305-m03)       <target type='serial' port='0'/>
	I0730 00:38:45.386365  516753 main.go:141] libmachine: (ha-161305-m03)     </console>
	I0730 00:38:45.386377  516753 main.go:141] libmachine: (ha-161305-m03)     <rng model='virtio'>
	I0730 00:38:45.386386  516753 main.go:141] libmachine: (ha-161305-m03)       <backend model='random'>/dev/random</backend>
	I0730 00:38:45.386396  516753 main.go:141] libmachine: (ha-161305-m03)     </rng>
	I0730 00:38:45.386408  516753 main.go:141] libmachine: (ha-161305-m03)     
	I0730 00:38:45.386418  516753 main.go:141] libmachine: (ha-161305-m03)     
	I0730 00:38:45.386432  516753 main.go:141] libmachine: (ha-161305-m03)   </devices>
	I0730 00:38:45.386445  516753 main.go:141] libmachine: (ha-161305-m03) </domain>
	I0730 00:38:45.386454  516753 main.go:141] libmachine: (ha-161305-m03) 
	I0730 00:38:45.393444  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:17:86:3b in network default
	I0730 00:38:45.394024  516753 main.go:141] libmachine: (ha-161305-m03) Ensuring networks are active...
	I0730 00:38:45.394047  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:45.394772  516753 main.go:141] libmachine: (ha-161305-m03) Ensuring network default is active
	I0730 00:38:45.394991  516753 main.go:141] libmachine: (ha-161305-m03) Ensuring network mk-ha-161305 is active
	I0730 00:38:45.395403  516753 main.go:141] libmachine: (ha-161305-m03) Getting domain xml...
	I0730 00:38:45.396108  516753 main.go:141] libmachine: (ha-161305-m03) Creating domain...
	I0730 00:38:46.631653  516753 main.go:141] libmachine: (ha-161305-m03) Waiting to get IP...
	I0730 00:38:46.632600  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:46.633076  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:46.633104  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:46.633057  517582 retry.go:31] will retry after 251.235798ms: waiting for machine to come up
	I0730 00:38:46.885588  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:46.885991  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:46.886025  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:46.885933  517582 retry.go:31] will retry after 331.91891ms: waiting for machine to come up
	I0730 00:38:47.219503  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:47.219871  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:47.219898  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:47.219824  517582 retry.go:31] will retry after 463.441174ms: waiting for machine to come up
	I0730 00:38:47.684510  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:47.684934  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:47.684957  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:47.684908  517582 retry.go:31] will retry after 367.134484ms: waiting for machine to come up
	I0730 00:38:48.053448  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:48.053963  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:48.053998  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:48.053906  517582 retry.go:31] will retry after 592.153453ms: waiting for machine to come up
	I0730 00:38:48.647392  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:48.647853  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:48.647880  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:48.647791  517582 retry.go:31] will retry after 808.251785ms: waiting for machine to come up
	I0730 00:38:49.457338  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:49.457669  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:49.457705  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:49.457626  517582 retry.go:31] will retry after 1.15599727s: waiting for machine to come up
	I0730 00:38:50.615145  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:50.615601  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:50.615622  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:50.615575  517582 retry.go:31] will retry after 1.157106732s: waiting for machine to come up
	I0730 00:38:51.773825  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:51.774237  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:51.774266  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:51.774183  517582 retry.go:31] will retry after 1.822875974s: waiting for machine to come up
	I0730 00:38:53.598782  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:53.599392  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:53.599422  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:53.599335  517582 retry.go:31] will retry after 2.16104532s: waiting for machine to come up
	I0730 00:38:55.762546  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:55.763013  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:55.763044  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:55.762969  517582 retry.go:31] will retry after 2.04317933s: waiting for machine to come up
	I0730 00:38:57.807343  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:38:57.807731  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:38:57.807754  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:38:57.807683  517582 retry.go:31] will retry after 3.113783261s: waiting for machine to come up
	I0730 00:39:00.923093  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:00.923591  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find current IP address of domain ha-161305-m03 in network mk-ha-161305
	I0730 00:39:00.923625  516753 main.go:141] libmachine: (ha-161305-m03) DBG | I0730 00:39:00.923538  517582 retry.go:31] will retry after 3.618921973s: waiting for machine to come up
	I0730 00:39:04.545762  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.546279  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has current primary IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.546313  516753 main.go:141] libmachine: (ha-161305-m03) Found IP for machine: 192.168.39.23
	I0730 00:39:04.546364  516753 main.go:141] libmachine: (ha-161305-m03) Reserving static IP address...
	I0730 00:39:04.546793  516753 main.go:141] libmachine: (ha-161305-m03) DBG | unable to find host DHCP lease matching {name: "ha-161305-m03", mac: "52:54:00:e7:c4:d8", ip: "192.168.39.23"} in network mk-ha-161305
	I0730 00:39:04.622641  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Getting to WaitForSSH function...
	I0730 00:39:04.622677  516753 main.go:141] libmachine: (ha-161305-m03) Reserved static IP address: 192.168.39.23
	I0730 00:39:04.622690  516753 main.go:141] libmachine: (ha-161305-m03) Waiting for SSH to be available...
	I0730 00:39:04.625419  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.625849  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:04.625894  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.626108  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Using SSH client type: external
	I0730 00:39:04.626139  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa (-rw-------)
	I0730 00:39:04.626169  516753 main.go:141] libmachine: (ha-161305-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 00:39:04.626181  516753 main.go:141] libmachine: (ha-161305-m03) DBG | About to run SSH command:
	I0730 00:39:04.626197  516753 main.go:141] libmachine: (ha-161305-m03) DBG | exit 0
	I0730 00:39:04.752761  516753 main.go:141] libmachine: (ha-161305-m03) DBG | SSH cmd err, output: <nil>: 
	I0730 00:39:04.753145  516753 main.go:141] libmachine: (ha-161305-m03) KVM machine creation complete!
	I0730 00:39:04.753483  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetConfigRaw
	I0730 00:39:04.754205  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:04.754443  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:04.754629  516753 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 00:39:04.754646  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:39:04.756020  516753 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 00:39:04.756037  516753 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 00:39:04.756045  516753 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 00:39:04.756054  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:04.758362  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.758708  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:04.758741  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.758835  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:04.759044  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.759222  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.759369  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:04.759575  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:04.759805  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:04.759819  516753 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 00:39:04.863976  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:39:04.864005  516753 main.go:141] libmachine: Detecting the provisioner...
	I0730 00:39:04.864012  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:04.867492  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.868000  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:04.868032  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.868215  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:04.868409  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.868584  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.868750  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:04.868945  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:04.869116  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:04.869126  516753 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 00:39:04.973058  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 00:39:04.973138  516753 main.go:141] libmachine: found compatible host: buildroot
	I0730 00:39:04.973148  516753 main.go:141] libmachine: Provisioning with buildroot...
	I0730 00:39:04.973157  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetMachineName
	I0730 00:39:04.973453  516753 buildroot.go:166] provisioning hostname "ha-161305-m03"
	I0730 00:39:04.973483  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetMachineName
	I0730 00:39:04.973700  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:04.976343  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.976695  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:04.976748  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:04.976917  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:04.977127  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.977296  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:04.977500  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:04.977692  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:04.977887  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:04.977902  516753 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305-m03 && echo "ha-161305-m03" | sudo tee /etc/hostname
	I0730 00:39:05.098460  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305-m03
	
	I0730 00:39:05.098495  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.101323  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.101703  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.101731  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.101934  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.102170  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.102360  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.102522  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.102711  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:05.102923  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:05.102940  516753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:39:05.220395  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:39:05.220440  516753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:39:05.220467  516753 buildroot.go:174] setting up certificates
	I0730 00:39:05.220481  516753 provision.go:84] configureAuth start
	I0730 00:39:05.220496  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetMachineName
	I0730 00:39:05.220829  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:39:05.223171  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.223547  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.223573  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.223736  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.226024  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.226412  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.226435  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.226602  516753 provision.go:143] copyHostCerts
	I0730 00:39:05.226637  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:39:05.226688  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:39:05.226707  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:39:05.226793  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:39:05.226889  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:39:05.226916  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:39:05.226926  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:39:05.226965  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:39:05.227032  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:39:05.227055  516753 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:39:05.227064  516753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:39:05.227095  516753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:39:05.227166  516753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305-m03 san=[127.0.0.1 192.168.39.23 ha-161305-m03 localhost minikube]
	I0730 00:39:05.282372  516753 provision.go:177] copyRemoteCerts
	I0730 00:39:05.282436  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:39:05.282463  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.285547  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.285901  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.285931  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.286184  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.286417  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.286607  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.286757  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:39:05.371512  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:39:05.371617  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:39:05.396955  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:39:05.397049  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 00:39:05.419732  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:39:05.419815  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:39:05.441378  516753 provision.go:87] duration metric: took 220.880297ms to configureAuth
	I0730 00:39:05.441410  516753 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:39:05.441675  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:39:05.441767  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.444532  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.444901  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.444928  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.445121  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.445349  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.445556  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.445714  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.445916  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:05.446080  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:05.446095  516753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:39:05.710472  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:39:05.710500  516753 main.go:141] libmachine: Checking connection to Docker...
	I0730 00:39:05.710508  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetURL
	I0730 00:39:05.711727  516753 main.go:141] libmachine: (ha-161305-m03) DBG | Using libvirt version 6000000
	I0730 00:39:05.715119  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.715632  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.715658  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.715826  516753 main.go:141] libmachine: Docker is up and running!
	I0730 00:39:05.715842  516753 main.go:141] libmachine: Reticulating splines...
	I0730 00:39:05.715850  516753 client.go:171] duration metric: took 20.682531918s to LocalClient.Create
	I0730 00:39:05.715875  516753 start.go:167] duration metric: took 20.682615707s to libmachine.API.Create "ha-161305"
	I0730 00:39:05.715882  516753 start.go:293] postStartSetup for "ha-161305-m03" (driver="kvm2")
	I0730 00:39:05.715892  516753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:39:05.715908  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.716143  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:39:05.716174  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.718445  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.718857  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.718884  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.719053  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.719256  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.719449  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.719603  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:39:05.806900  516753 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:39:05.810896  516753 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:39:05.810921  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:39:05.810980  516753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:39:05.811076  516753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:39:05.811087  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:39:05.811169  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:39:05.819685  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:39:05.841872  516753 start.go:296] duration metric: took 125.975471ms for postStartSetup
	I0730 00:39:05.841926  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetConfigRaw
	I0730 00:39:05.842548  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:39:05.845348  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.845781  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.845807  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.846198  516753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:39:05.846441  516753 start.go:128] duration metric: took 20.832069779s to createHost
	I0730 00:39:05.846474  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.848982  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.849383  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.849412  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.849571  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.849769  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.849938  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.850086  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.850284  516753 main.go:141] libmachine: Using SSH client type: native
	I0730 00:39:05.850456  516753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0730 00:39:05.850466  516753 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 00:39:05.957277  516753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722299945.928805840
	
	I0730 00:39:05.957310  516753 fix.go:216] guest clock: 1722299945.928805840
	I0730 00:39:05.957318  516753 fix.go:229] Guest: 2024-07-30 00:39:05.92880584 +0000 UTC Remote: 2024-07-30 00:39:05.846456904 +0000 UTC m=+157.216279571 (delta=82.348936ms)
	I0730 00:39:05.957337  516753 fix.go:200] guest clock delta is within tolerance: 82.348936ms
	I0730 00:39:05.957343  516753 start.go:83] releasing machines lock for "ha-161305-m03", held for 20.943120972s
	I0730 00:39:05.957361  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.957662  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:39:05.960319  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.960668  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.960697  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.963170  516753 out.go:177] * Found network options:
	I0730 00:39:05.964611  516753 out.go:177]   - NO_PROXY=192.168.39.80,192.168.39.126
	W0730 00:39:05.965865  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	W0730 00:39:05.965887  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 00:39:05.965904  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.966503  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.966712  516753 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:39:05.966827  516753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:39:05.966876  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	W0730 00:39:05.966903  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	W0730 00:39:05.966925  516753 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 00:39:05.967033  516753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:39:05.967059  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:39:05.969953  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.970276  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.970306  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.970354  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.970577  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.970798  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.970852  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:05.970877  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:05.970971  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.971055  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:39:05.971141  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:39:05.971194  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:39:05.971342  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:39:05.971496  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:39:06.209106  516753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:39:06.215673  516753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:39:06.215743  516753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:39:06.232821  516753 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 00:39:06.232845  516753 start.go:495] detecting cgroup driver to use...
	I0730 00:39:06.232924  516753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:39:06.248818  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:39:06.262755  516753 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:39:06.262815  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:39:06.276401  516753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:39:06.290150  516753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:39:06.417763  516753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:39:06.559300  516753 docker.go:233] disabling docker service ...
	I0730 00:39:06.559399  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:39:06.578963  516753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:39:06.591263  516753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:39:06.722677  516753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:39:06.833582  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:39:06.847857  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:39:06.866197  516753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:39:06.866269  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.878077  516753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:39:06.878143  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.888444  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.898494  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.908498  516753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:39:06.918372  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.928530  516753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.945248  516753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:39:06.955740  516753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:39:06.965090  516753 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 00:39:06.965160  516753 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 00:39:06.978702  516753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:39:06.989889  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:39:07.105796  516753 ssh_runner.go:195] Run: sudo systemctl restart crio
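
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.9, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls entry for net.ipv4.ip_unprivileged_port_start=0) before CRI-O is restarted. A quick check on the guest, assumed here rather than taken from minikube:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio    # should print "active" once the restart above has finished
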
	I0730 00:39:07.247139  516753 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:39:07.247236  516753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:39:07.251631  516753 start.go:563] Will wait 60s for crictl version
	I0730 00:39:07.251693  516753 ssh_runner.go:195] Run: which crictl
	I0730 00:39:07.255268  516753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:39:07.292292  516753 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:39:07.292369  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:39:07.320137  516753 ssh_runner.go:195] Run: crio --version
	I0730 00:39:07.351426  516753 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:39:07.352885  516753 out.go:177]   - env NO_PROXY=192.168.39.80
	I0730 00:39:07.354075  516753 out.go:177]   - env NO_PROXY=192.168.39.80,192.168.39.126
	I0730 00:39:07.355118  516753 main.go:141] libmachine: (ha-161305-m03) Calling .GetIP
	I0730 00:39:07.357961  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:07.358318  516753 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:39:07.358354  516753 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:39:07.358612  516753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:39:07.362574  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:39:07.374603  516753 mustload.go:65] Loading cluster: ha-161305
	I0730 00:39:07.374857  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:39:07.375118  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:39:07.375162  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:39:07.390803  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0730 00:39:07.391252  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:39:07.391810  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:39:07.391832  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:39:07.392172  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:39:07.392366  516753 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:39:07.394068  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:39:07.394385  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:39:07.394422  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:39:07.409550  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I0730 00:39:07.409931  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:39:07.410364  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:39:07.410389  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:39:07.410767  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:39:07.410999  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:39:07.411175  516753 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.23
	I0730 00:39:07.411188  516753 certs.go:194] generating shared ca certs ...
	I0730 00:39:07.411202  516753 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:39:07.411368  516753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:39:07.411409  516753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:39:07.411420  516753 certs.go:256] generating profile certs ...
	I0730 00:39:07.411491  516753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:39:07.411514  516753 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.dd5da9ed
	I0730 00:39:07.411528  516753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.dd5da9ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.126 192.168.39.23 192.168.39.254]
	I0730 00:39:07.498421  516753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.dd5da9ed ...
	I0730 00:39:07.498457  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.dd5da9ed: {Name:mka51ce7224e7be62982785ca0a5d827177c78bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:39:07.498659  516753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.dd5da9ed ...
	I0730 00:39:07.498676  516753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.dd5da9ed: {Name:mke31ca91f4cf5aa80f2d78bd811dd38219b955c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:39:07.498774  516753 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.dd5da9ed -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:39:07.498914  516753 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.dd5da9ed -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
	I0730 00:39:07.499045  516753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:39:07.499063  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:39:07.499076  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:39:07.499091  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:39:07.499104  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:39:07.499118  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:39:07.499130  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:39:07.499144  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:39:07.499156  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:39:07.499205  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:39:07.499232  516753 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:39:07.499241  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:39:07.499260  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:39:07.499281  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:39:07.499301  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:39:07.499350  516753 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:39:07.499375  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:39:07.499387  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:39:07.499399  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:39:07.499433  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:39:07.502457  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:39:07.502869  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:39:07.502894  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:39:07.503074  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:39:07.503304  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:39:07.503452  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:39:07.503564  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:39:07.581193  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0730 00:39:07.586347  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0730 00:39:07.596873  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0730 00:39:07.600660  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0730 00:39:07.612128  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0730 00:39:07.616807  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0730 00:39:07.627060  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0730 00:39:07.631688  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0730 00:39:07.642957  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0730 00:39:07.646916  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0730 00:39:07.657049  516753 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0730 00:39:07.661782  516753 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0730 00:39:07.673347  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:39:07.700025  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:39:07.728378  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:39:07.756115  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:39:07.781783  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0730 00:39:07.805695  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 00:39:07.829008  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:39:07.852820  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:39:07.875139  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:39:07.898794  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:39:07.921496  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:39:07.946220  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0730 00:39:07.961805  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0730 00:39:07.977857  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0730 00:39:07.997661  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0730 00:39:08.014452  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0730 00:39:08.031804  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0730 00:39:08.047186  516753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0730 00:39:08.062466  516753 ssh_runner.go:195] Run: openssl version
	I0730 00:39:08.067840  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:39:08.078012  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:39:08.082724  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:39:08.082796  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:39:08.088185  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:39:08.098493  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:39:08.109250  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:39:08.113938  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:39:08.114000  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:39:08.119602  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:39:08.130088  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:39:08.141150  516753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:39:08.145107  516753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:39:08.145171  516753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:39:08.151000  516753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
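
The openssl x509 -hash / ln -fs pairs above follow OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs gets a <subject-hash>.0 symlink so verification can locate it by hash. A hedged sketch of the same idea for the shared minikube CA (b5213941 is simply the hash observed in this run):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # h == b5213941 here
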
	I0730 00:39:08.161268  516753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:39:08.165143  516753 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 00:39:08.165221  516753 kubeadm.go:934] updating node {m03 192.168.39.23 8443 v1.30.3 crio true true} ...
	I0730 00:39:08.165330  516753 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:39:08.165363  516753 kube-vip.go:115] generating kube-vip config ...
	I0730 00:39:08.165408  516753 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:39:08.182050  516753 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:39:08.182139  516753 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
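
A few lines below, this manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it as a static pod. Reading the env vars, vip_leaderelection/cp_enable mean the control-plane node currently holding the plndr-cp-lock lease answers on the virtual IP 192.168.39.254 (on eth0, as a /32) and, with lb_enable, balances API traffic on port 8443. A small illustrative check, not part of the test flow, for whether a given node is the current VIP holder:

    ip -4 addr show dev eth0 | grep -w 192.168.39.254    # prints an address line only on the leader
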
	I0730 00:39:08.182221  516753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:39:08.193035  516753 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0730 00:39:08.193101  516753 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0730 00:39:08.202908  516753 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0730 00:39:08.202919  516753 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0730 00:39:08.202941  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0730 00:39:08.202940  516753 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0730 00:39:08.202962  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:39:08.202963  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0730 00:39:08.203014  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0730 00:39:08.203028  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0730 00:39:08.220000  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0730 00:39:08.220045  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0730 00:39:08.220052  516753 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0730 00:39:08.220073  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0730 00:39:08.220090  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0730 00:39:08.220281  516753 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0730 00:39:08.251715  516753 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0730 00:39:08.251761  516753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
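
Since /var/lib/minikube/binaries/v1.30.3 did not exist on the new node, kubeadm, kubectl and kubelet are copied over from the host-side cache; the log also records the upstream URLs and their .sha256 checksum files. A stand-alone, hedged way to fetch and verify the same binaries (not the code path used here):

    V=v1.30.3; ARCH=amd64
    for b in kubeadm kubectl kubelet; do
        curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/${b}"
        echo "$(curl -fsSL https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/${b}.sha256)  ${b}" | sha256sum -c -
    done
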
	I0730 00:39:09.090942  516753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0730 00:39:09.100443  516753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0730 00:39:09.116571  516753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:39:09.132612  516753 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0730 00:39:09.149682  516753 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:39:09.153360  516753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 00:39:09.164571  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:39:09.283359  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:39:09.300550  516753 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:39:09.300931  516753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:39:09.300988  516753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:39:09.317396  516753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0730 00:39:09.318001  516753 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:39:09.318630  516753 main.go:141] libmachine: Using API Version  1
	I0730 00:39:09.318657  516753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:39:09.319044  516753 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:39:09.319266  516753 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:39:09.319465  516753 start.go:317] joinCluster: &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:39:09.319652  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0730 00:39:09.319681  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:39:09.322968  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:39:09.323447  516753 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:39:09.323489  516753 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:39:09.323654  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:39:09.323827  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:39:09.323939  516753 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:39:09.324058  516753 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:39:09.484200  516753 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:39:09.484264  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token th4l0z.ino3nmjzd3n2m912 --discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-161305-m03 --control-plane --apiserver-advertise-address=192.168.39.23 --apiserver-bind-port=8443"
	I0730 00:39:32.818813  516753 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token th4l0z.ino3nmjzd3n2m912 --discovery-token-ca-cert-hash sha256:0571f4da9a06e338cd8d18be6864398ed9b58dcd1fbf76ed6f924e9e8ae75702 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-161305-m03 --control-plane --apiserver-advertise-address=192.168.39.23 --apiserver-bind-port=8443": (23.334518779s)
	I0730 00:39:32.818856  516753 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0730 00:39:33.419606  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-161305-m03 minikube.k8s.io/updated_at=2024_07_30T00_39_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500 minikube.k8s.io/name=ha-161305 minikube.k8s.io/primary=false
	I0730 00:39:33.536762  516753 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-161305-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0730 00:39:33.632378  516753 start.go:319] duration metric: took 24.312908762s to joinCluster
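
The join step above reduces to two commands: minting a join token on the existing control plane, then running kubeadm join on the new machine with the control-plane flags minikube appends. Condensed from the exact invocations in the log, with the token and CA hash replaced by placeholders:

    # on ha-161305 (existing control plane)
    sudo kubeadm token create --print-join-command --ttl=0
    # on ha-161305-m03 (joining node)
    sudo kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --apiserver-advertise-address=192.168.39.23 --apiserver-bind-port=8443 \
        --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-161305-m03 \
        --ignore-preflight-errors=all
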
	I0730 00:39:33.632491  516753 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 00:39:33.632858  516753 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:39:33.634081  516753 out.go:177] * Verifying Kubernetes components...
	I0730 00:39:33.635418  516753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:39:33.911969  516753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:39:33.930050  516753 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:39:33.930273  516753 kapi.go:59] client config for ha-161305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key", CAFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0730 00:39:33.930329  516753 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.80:8443
	I0730 00:39:33.930542  516753 node_ready.go:35] waiting up to 6m0s for node "ha-161305-m03" to be "Ready" ...
	I0730 00:39:33.930632  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:33.930641  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:33.930648  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:33.930652  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:33.934269  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:34.431783  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:34.431808  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:34.431819  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:34.431824  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:34.435252  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:34.931555  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:34.931579  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:34.931592  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:34.931599  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:34.935359  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:35.430967  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:35.430994  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:35.431009  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:35.431018  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:35.437104  516753 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 00:39:35.930808  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:35.930831  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:35.930839  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:35.930844  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:35.933668  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:35.934539  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
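
The repeated GETs to /api/v1/nodes/ha-161305-m03 above and below are the readiness poll: after overriding the stale VIP endpoint with https://192.168.39.80:8443, minikube re-reads the node object until its Ready condition turns True, for up to 6m. Roughly the same wait expressed with kubectl, assuming the kubeconfig context is named after the profile:

    kubectl --context ha-161305 wait --for=condition=Ready node/ha-161305-m03 --timeout=6m
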
	I0730 00:39:36.431520  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:36.431551  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:36.431563  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:36.431570  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:36.435119  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:36.931515  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:36.931542  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:36.931551  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:36.931556  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:36.935366  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:37.431003  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:37.431024  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:37.431031  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:37.431037  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:37.434908  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:37.931483  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:37.931511  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:37.931523  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:37.931528  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:37.936330  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:39:37.936935  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:38.431257  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:38.431287  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:38.431296  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:38.431300  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:38.435020  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:38.930774  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:38.930798  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:38.930806  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:38.930809  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:38.934630  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:39.430899  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:39.430927  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:39.430939  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:39.430945  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:39.435151  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:39:39.931824  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:39.931858  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:39.931870  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:39.931876  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:39.935552  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:40.430822  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:40.430844  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:40.430852  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:40.430857  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:40.437458  516753 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 00:39:40.438137  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:40.930996  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:40.931022  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:40.931040  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:40.931047  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:40.934641  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:41.431390  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:41.431414  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:41.431425  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:41.431431  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:41.436495  516753 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 00:39:41.931643  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:41.931671  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:41.931680  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:41.931685  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:41.935175  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:42.431307  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:42.431332  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:42.431343  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:42.431349  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:42.434611  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:42.931405  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:42.931428  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:42.931437  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:42.931441  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:42.934995  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:42.935591  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:43.431646  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:43.431670  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:43.431678  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:43.431681  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:43.435512  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:43.931208  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:43.931237  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:43.931260  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:43.931268  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:43.934720  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:44.430980  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:44.431004  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:44.431012  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:44.431018  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:44.434486  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:44.931589  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:44.931617  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:44.931627  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:44.931633  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:44.935406  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:44.935958  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:45.430795  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:45.430818  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:45.430826  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:45.430831  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:45.434122  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:45.931158  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:45.931179  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:45.931187  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:45.931192  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:45.934698  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:46.430848  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:46.430872  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:46.430880  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:46.430884  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:46.434288  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:46.931375  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:46.931400  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:46.931408  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:46.931411  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:46.937416  516753 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 00:39:46.938108  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:47.431355  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:47.431378  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:47.431386  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:47.431390  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:47.434760  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:47.930736  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:47.930759  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:47.930768  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:47.930773  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:47.933842  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:48.431820  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:48.431850  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:48.431861  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:48.431867  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:48.435153  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:48.930802  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:48.930831  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:48.930842  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:48.930847  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:48.934475  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:49.431498  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:49.431525  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:49.431534  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:49.431538  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:49.435295  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:49.435926  516753 node_ready.go:53] node "ha-161305-m03" has status "Ready":"False"
	I0730 00:39:49.931361  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:49.931387  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:49.931397  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:49.931403  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:49.934677  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:50.431114  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:50.431139  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:50.431147  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:50.431151  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:50.434111  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:50.930946  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:50.930975  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:50.930985  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:50.930989  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:50.935229  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:39:51.431099  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:51.431141  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.431154  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.431160  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.434671  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:51.931707  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:51.931736  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.931745  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.931749  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.934803  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:51.935647  516753 node_ready.go:49] node "ha-161305-m03" has status "Ready":"True"
	I0730 00:39:51.935674  516753 node_ready.go:38] duration metric: took 18.005114813s for node "ha-161305-m03" to be "Ready" ...
	I0730 00:39:51.935686  516753 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:39:51.935773  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:51.935786  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.935796  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.935804  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.942634  516753 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 00:39:51.949823  516753 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.949915  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bdpds
	I0730 00:39:51.949923  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.949931  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.949935  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.953080  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:51.953646  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:51.953659  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.953666  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.953670  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.956092  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.956511  516753 pod_ready.go:92] pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:51.956527  516753 pod_ready.go:81] duration metric: took 6.677219ms for pod "coredns-7db6d8ff4d-bdpds" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.956536  516753 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.956583  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mzcln
	I0730 00:39:51.956590  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.956597  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.956603  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.958990  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.959533  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:51.959546  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.959555  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.959561  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.961627  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.962111  516753 pod_ready.go:92] pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:51.962131  516753 pod_ready.go:81] duration metric: took 5.587966ms for pod "coredns-7db6d8ff4d-mzcln" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.962152  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.962228  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305
	I0730 00:39:51.962237  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.962248  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.962255  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.964321  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.965030  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:51.965047  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.965058  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.965064  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.967502  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.967941  516753 pod_ready.go:92] pod "etcd-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:51.967965  516753 pod_ready.go:81] duration metric: took 5.793254ms for pod "etcd-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.967976  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.968044  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305-m02
	I0730 00:39:51.968056  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.968072  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.968079  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.970942  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.971929  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:51.971944  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:51.971952  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:51.971955  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:51.974306  516753 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 00:39:51.974863  516753 pod_ready.go:92] pod "etcd-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:51.974883  516753 pod_ready.go:81] duration metric: took 6.898155ms for pod "etcd-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:51.974896  516753 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.132180  516753 request.go:629] Waited for 157.209152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305-m03
	I0730 00:39:52.132248  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/etcd-ha-161305-m03
	I0730 00:39:52.132266  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.132276  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.132283  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.135623  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:52.332592  516753 request.go:629] Waited for 196.363071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:52.332672  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:52.332680  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.332691  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.332697  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.336136  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:52.336603  516753 pod_ready.go:92] pod "etcd-ha-161305-m03" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:52.336625  516753 pod_ready.go:81] duration metric: took 361.718062ms for pod "etcd-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.336651  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.531710  516753 request.go:629] Waited for 194.967886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305
	I0730 00:39:52.531791  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305
	I0730 00:39:52.531802  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.531810  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.531818  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.535463  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:52.732736  516753 request.go:629] Waited for 196.392523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:52.732798  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:52.732803  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.732810  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.732814  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.740836  516753 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0730 00:39:52.741455  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:52.741489  516753 pod_ready.go:81] duration metric: took 404.824489ms for pod "kube-apiserver-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.741515  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:52.931759  516753 request.go:629] Waited for 190.119362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m02
	I0730 00:39:52.931903  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m02
	I0730 00:39:52.931924  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:52.931934  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:52.931940  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:52.936086  516753 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 00:39:53.132667  516753 request.go:629] Waited for 195.771748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:53.132759  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:53.132770  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.132781  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.132788  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.136199  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.136656  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:53.136678  516753 pod_ready.go:81] duration metric: took 395.152635ms for pod "kube-apiserver-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.136691  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.331783  516753 request.go:629] Waited for 194.986103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m03
	I0730 00:39:53.331846  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-161305-m03
	I0730 00:39:53.331852  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.331859  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.331865  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.335697  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.532037  516753 request.go:629] Waited for 195.386948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:53.532143  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:53.532152  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.532165  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.532172  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.535532  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.536207  516753 pod_ready.go:92] pod "kube-apiserver-ha-161305-m03" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:53.536227  516753 pod_ready.go:81] duration metric: took 399.528992ms for pod "kube-apiserver-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.536238  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.732653  516753 request.go:629] Waited for 196.316924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305
	I0730 00:39:53.732739  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305
	I0730 00:39:53.732745  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.732753  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.732757  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.736421  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.932572  516753 request.go:629] Waited for 194.928773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:53.932653  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:53.932663  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:53.932675  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:53.932683  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:53.935878  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:53.936437  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:53.936458  516753 pod_ready.go:81] duration metric: took 400.209865ms for pod "kube-controller-manager-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:53.936468  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.132530  516753 request.go:629] Waited for 195.97688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m02
	I0730 00:39:54.132594  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m02
	I0730 00:39:54.132601  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.132610  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.132615  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.136152  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:54.332410  516753 request.go:629] Waited for 195.441902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:54.332485  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:54.332491  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.332501  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.332519  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.335629  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:54.336560  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:54.336582  516753 pod_ready.go:81] duration metric: took 400.107169ms for pod "kube-controller-manager-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.336592  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.532776  516753 request.go:629] Waited for 196.071018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m03
	I0730 00:39:54.532857  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-161305-m03
	I0730 00:39:54.532864  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.532872  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.532879  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.536395  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:54.732451  516753 request.go:629] Waited for 195.265957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:54.732547  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:54.732558  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.732568  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.732574  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.736178  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:54.736677  516753 pod_ready.go:92] pod "kube-controller-manager-ha-161305-m03" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:54.736698  516753 pod_ready.go:81] duration metric: took 400.098829ms for pod "kube-controller-manager-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.736720  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqr2f" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:54.931801  516753 request.go:629] Waited for 194.994778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqr2f
	I0730 00:39:54.931880  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqr2f
	I0730 00:39:54.931886  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:54.931908  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:54.931933  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:54.935261  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.132230  516753 request.go:629] Waited for 196.210898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:55.132325  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:55.132336  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.132348  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.132360  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.135845  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.136318  516753 pod_ready.go:92] pod "kube-proxy-pqr2f" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:55.136338  516753 pod_ready.go:81] duration metric: took 399.606227ms for pod "kube-proxy-pqr2f" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.136351  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v86sk" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.332508  516753 request.go:629] Waited for 196.05813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v86sk
	I0730 00:39:55.332590  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v86sk
	I0730 00:39:55.332601  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.332613  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.332623  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.336548  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.531742  516753 request.go:629] Waited for 194.290564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:55.531803  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:55.531816  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.531824  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.531828  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.534944  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.535498  516753 pod_ready.go:92] pod "kube-proxy-v86sk" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:55.535519  516753 pod_ready.go:81] duration metric: took 399.160843ms for pod "kube-proxy-v86sk" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.535529  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wptvn" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.732674  516753 request.go:629] Waited for 197.073515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wptvn
	I0730 00:39:55.732761  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wptvn
	I0730 00:39:55.732770  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.732779  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.732783  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.736129  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.932185  516753 request.go:629] Waited for 195.390624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:55.932257  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:55.932263  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:55.932272  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:55.932279  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:55.935524  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:55.936142  516753 pod_ready.go:92] pod "kube-proxy-wptvn" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:55.936162  516753 pod_ready.go:81] duration metric: took 400.627207ms for pod "kube-proxy-wptvn" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:55.936172  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.132690  516753 request.go:629] Waited for 196.427303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305
	I0730 00:39:56.132793  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305
	I0730 00:39:56.132802  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.132810  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.132816  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.136203  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:56.332315  516753 request.go:629] Waited for 195.359193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:56.332390  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305
	I0730 00:39:56.332395  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.332403  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.332411  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.335886  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:56.336461  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:56.336480  516753 pod_ready.go:81] duration metric: took 400.30083ms for pod "kube-scheduler-ha-161305" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.336492  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.532626  516753 request.go:629] Waited for 196.035458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m02
	I0730 00:39:56.532719  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m02
	I0730 00:39:56.532729  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.532741  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.532752  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.536219  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:56.732247  516753 request.go:629] Waited for 195.367062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:56.732315  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m02
	I0730 00:39:56.732322  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.732332  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.732338  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.735731  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:56.736483  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:56.736505  516753 pod_ready.go:81] duration metric: took 400.004617ms for pod "kube-scheduler-ha-161305-m02" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.736518  516753 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:56.932649  516753 request.go:629] Waited for 196.051111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m03
	I0730 00:39:56.932762  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-161305-m03
	I0730 00:39:56.932768  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:56.932777  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:56.932784  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:56.936237  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:57.132364  516753 request.go:629] Waited for 195.410488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:57.132438  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes/ha-161305-m03
	I0730 00:39:57.132443  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.132451  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.132457  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.135772  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:57.136435  516753 pod_ready.go:92] pod "kube-scheduler-ha-161305-m03" in "kube-system" namespace has status "Ready":"True"
	I0730 00:39:57.136456  516753 pod_ready.go:81] duration metric: took 399.929871ms for pod "kube-scheduler-ha-161305-m03" in "kube-system" namespace to be "Ready" ...
	I0730 00:39:57.136467  516753 pod_ready.go:38] duration metric: took 5.200768417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 00:39:57.136483  516753 api_server.go:52] waiting for apiserver process to appear ...
	I0730 00:39:57.136547  516753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:39:57.155174  516753 api_server.go:72] duration metric: took 23.522630913s to wait for apiserver process to appear ...
	I0730 00:39:57.155209  516753 api_server.go:88] waiting for apiserver healthz status ...
	I0730 00:39:57.155239  516753 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0730 00:39:57.163492  516753 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0730 00:39:57.163592  516753 round_trippers.go:463] GET https://192.168.39.80:8443/version
	I0730 00:39:57.163605  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.163618  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.163629  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.165067  516753 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0730 00:39:57.165269  516753 api_server.go:141] control plane version: v1.30.3
	I0730 00:39:57.165290  516753 api_server.go:131] duration metric: took 10.072767ms to wait for apiserver health ...
	I0730 00:39:57.165299  516753 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 00:39:57.332768  516753 request.go:629] Waited for 167.351583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:57.332841  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:57.332848  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.332856  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.332864  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.341731  516753 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0730 00:39:57.348866  516753 system_pods.go:59] 24 kube-system pods found
	I0730 00:39:57.348895  516753 system_pods.go:61] "coredns-7db6d8ff4d-bdpds" [7c1470c5-85f4-4dfa-84c0-14aa6c713e73] Running
	I0730 00:39:57.348901  516753 system_pods.go:61] "coredns-7db6d8ff4d-mzcln" [cab12f67-38e0-41f7-8414-120064dca1e6] Running
	I0730 00:39:57.348907  516753 system_pods.go:61] "etcd-ha-161305" [5c7dae60-3334-4bbb-90d0-96902a0e19ca] Running
	I0730 00:39:57.348910  516753 system_pods.go:61] "etcd-ha-161305-m02" [18952930-32a5-4b81-a67c-6324aee65eb8] Running
	I0730 00:39:57.348915  516753 system_pods.go:61] "etcd-ha-161305-m03" [4f9f6485-c2e1-4288-abd9-83dd8f742e9f] Running
	I0730 00:39:57.348920  516753 system_pods.go:61] "kindnet-dj7v2" [8d584855-119a-4df9-87d4-4c4fd59ec386] Running
	I0730 00:39:57.348925  516753 system_pods.go:61] "kindnet-x7292" [10f89bb1-e8b3-4901-b924-59401555bebd] Running
	I0730 00:39:57.348929  516753 system_pods.go:61] "kindnet-zrzxf" [3745faa8-044d-4923-8a49-c21a0332e208] Running
	I0730 00:39:57.348934  516753 system_pods.go:61] "kube-apiserver-ha-161305" [55b68f3e-7127-4a03-83d7-ea169937b7b7] Running
	I0730 00:39:57.348939  516753 system_pods.go:61] "kube-apiserver-ha-161305-m02" [834df1fc-4400-475f-b86e-7176f335f79b] Running
	I0730 00:39:57.348946  516753 system_pods.go:61] "kube-apiserver-ha-161305-m03" [9519b474-7a17-43b5-8ad0-78340215eea1] Running
	I0730 00:39:57.348956  516753 system_pods.go:61] "kube-controller-manager-ha-161305" [647f1107-c722-4d08-a32b-d53a24f212c9] Running
	I0730 00:39:57.348963  516753 system_pods.go:61] "kube-controller-manager-ha-161305-m02" [2b16c61d-99fe-4807-b362-2361e6d9ec03] Running
	I0730 00:39:57.348968  516753 system_pods.go:61] "kube-controller-manager-ha-161305-m03" [89d7e90c-024c-498e-9f64-6ea95255e90e] Running
	I0730 00:39:57.348977  516753 system_pods.go:61] "kube-proxy-pqr2f" [88c5dd9f-639f-4085-8a0f-064b53e870ea] Running
	I0730 00:39:57.348982  516753 system_pods.go:61] "kube-proxy-v86sk" [daba82b2-fd20-4b41-bba0-e8927cb91f2e] Running
	I0730 00:39:57.348989  516753 system_pods.go:61] "kube-proxy-wptvn" [1733d06b-6eb7-4dd5-9349-b727cc05e797] Running
	I0730 00:39:57.348997  516753 system_pods.go:61] "kube-scheduler-ha-161305" [c9ce0f0c-40b3-44ea-8c7d-f8b1d7af9e16] Running
	I0730 00:39:57.349002  516753 system_pods.go:61] "kube-scheduler-ha-161305-m02" [98fa3e7a-7ed2-44b7-a1be-7121ca4899b0] Running
	I0730 00:39:57.349009  516753 system_pods.go:61] "kube-scheduler-ha-161305-m03" [0df78a8a-e986-43a8-b8b5-0a2ce029b53b] Running
	I0730 00:39:57.349014  516753 system_pods.go:61] "kube-vip-ha-161305" [084d986e-4abd-4c66-aea9-5738f6a60ac5] Running
	I0730 00:39:57.349025  516753 system_pods.go:61] "kube-vip-ha-161305-m02" [6282069b-1ac8-44eb-910f-d658a28ae0f1] Running
	I0730 00:39:57.349029  516753 system_pods.go:61] "kube-vip-ha-161305-m03" [9e075c09-55e3-4669-acb0-b53947d96691] Running
	I0730 00:39:57.349031  516753 system_pods.go:61] "storage-provisioner" [75260b22-5ffc-4848-8c70-5b9cb3f010bf] Running
	I0730 00:39:57.349038  516753 system_pods.go:74] duration metric: took 183.733644ms to wait for pod list to return data ...
	I0730 00:39:57.349050  516753 default_sa.go:34] waiting for default service account to be created ...
	I0730 00:39:57.531719  516753 request.go:629] Waited for 182.570496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/default/serviceaccounts
	I0730 00:39:57.531787  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/default/serviceaccounts
	I0730 00:39:57.531794  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.531802  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.531806  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.535347  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:57.535514  516753 default_sa.go:45] found service account: "default"
	I0730 00:39:57.535534  516753 default_sa.go:55] duration metric: took 186.471929ms for default service account to be created ...
	I0730 00:39:57.535558  516753 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 00:39:57.731815  516753 request.go:629] Waited for 196.17079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:57.731891  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/namespaces/kube-system/pods
	I0730 00:39:57.731901  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.731913  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.731924  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.738135  516753 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 00:39:57.744635  516753 system_pods.go:86] 24 kube-system pods found
	I0730 00:39:57.744666  516753 system_pods.go:89] "coredns-7db6d8ff4d-bdpds" [7c1470c5-85f4-4dfa-84c0-14aa6c713e73] Running
	I0730 00:39:57.744674  516753 system_pods.go:89] "coredns-7db6d8ff4d-mzcln" [cab12f67-38e0-41f7-8414-120064dca1e6] Running
	I0730 00:39:57.744680  516753 system_pods.go:89] "etcd-ha-161305" [5c7dae60-3334-4bbb-90d0-96902a0e19ca] Running
	I0730 00:39:57.744686  516753 system_pods.go:89] "etcd-ha-161305-m02" [18952930-32a5-4b81-a67c-6324aee65eb8] Running
	I0730 00:39:57.744692  516753 system_pods.go:89] "etcd-ha-161305-m03" [4f9f6485-c2e1-4288-abd9-83dd8f742e9f] Running
	I0730 00:39:57.744699  516753 system_pods.go:89] "kindnet-dj7v2" [8d584855-119a-4df9-87d4-4c4fd59ec386] Running
	I0730 00:39:57.744717  516753 system_pods.go:89] "kindnet-x7292" [10f89bb1-e8b3-4901-b924-59401555bebd] Running
	I0730 00:39:57.744727  516753 system_pods.go:89] "kindnet-zrzxf" [3745faa8-044d-4923-8a49-c21a0332e208] Running
	I0730 00:39:57.744737  516753 system_pods.go:89] "kube-apiserver-ha-161305" [55b68f3e-7127-4a03-83d7-ea169937b7b7] Running
	I0730 00:39:57.744747  516753 system_pods.go:89] "kube-apiserver-ha-161305-m02" [834df1fc-4400-475f-b86e-7176f335f79b] Running
	I0730 00:39:57.744756  516753 system_pods.go:89] "kube-apiserver-ha-161305-m03" [9519b474-7a17-43b5-8ad0-78340215eea1] Running
	I0730 00:39:57.744764  516753 system_pods.go:89] "kube-controller-manager-ha-161305" [647f1107-c722-4d08-a32b-d53a24f212c9] Running
	I0730 00:39:57.744772  516753 system_pods.go:89] "kube-controller-manager-ha-161305-m02" [2b16c61d-99fe-4807-b362-2361e6d9ec03] Running
	I0730 00:39:57.744778  516753 system_pods.go:89] "kube-controller-manager-ha-161305-m03" [89d7e90c-024c-498e-9f64-6ea95255e90e] Running
	I0730 00:39:57.744784  516753 system_pods.go:89] "kube-proxy-pqr2f" [88c5dd9f-639f-4085-8a0f-064b53e870ea] Running
	I0730 00:39:57.744792  516753 system_pods.go:89] "kube-proxy-v86sk" [daba82b2-fd20-4b41-bba0-e8927cb91f2e] Running
	I0730 00:39:57.744801  516753 system_pods.go:89] "kube-proxy-wptvn" [1733d06b-6eb7-4dd5-9349-b727cc05e797] Running
	I0730 00:39:57.744810  516753 system_pods.go:89] "kube-scheduler-ha-161305" [c9ce0f0c-40b3-44ea-8c7d-f8b1d7af9e16] Running
	I0730 00:39:57.744819  516753 system_pods.go:89] "kube-scheduler-ha-161305-m02" [98fa3e7a-7ed2-44b7-a1be-7121ca4899b0] Running
	I0730 00:39:57.744827  516753 system_pods.go:89] "kube-scheduler-ha-161305-m03" [0df78a8a-e986-43a8-b8b5-0a2ce029b53b] Running
	I0730 00:39:57.744834  516753 system_pods.go:89] "kube-vip-ha-161305" [084d986e-4abd-4c66-aea9-5738f6a60ac5] Running
	I0730 00:39:57.744842  516753 system_pods.go:89] "kube-vip-ha-161305-m02" [6282069b-1ac8-44eb-910f-d658a28ae0f1] Running
	I0730 00:39:57.744849  516753 system_pods.go:89] "kube-vip-ha-161305-m03" [9e075c09-55e3-4669-acb0-b53947d96691] Running
	I0730 00:39:57.744858  516753 system_pods.go:89] "storage-provisioner" [75260b22-5ffc-4848-8c70-5b9cb3f010bf] Running
	I0730 00:39:57.744867  516753 system_pods.go:126] duration metric: took 209.300644ms to wait for k8s-apps to be running ...
	I0730 00:39:57.744880  516753 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 00:39:57.744934  516753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:39:57.760382  516753 system_svc.go:56] duration metric: took 15.49353ms WaitForService to wait for kubelet
	I0730 00:39:57.760461  516753 kubeadm.go:582] duration metric: took 24.127926379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:39:57.760490  516753 node_conditions.go:102] verifying NodePressure condition ...
	I0730 00:39:57.931819  516753 request.go:629] Waited for 171.253197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.80:8443/api/v1/nodes
	I0730 00:39:57.931881  516753 round_trippers.go:463] GET https://192.168.39.80:8443/api/v1/nodes
	I0730 00:39:57.931887  516753 round_trippers.go:469] Request Headers:
	I0730 00:39:57.931895  516753 round_trippers.go:473]     Accept: application/json, */*
	I0730 00:39:57.931899  516753 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0730 00:39:57.935672  516753 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 00:39:57.936915  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:39:57.936942  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:39:57.936958  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:39:57.936963  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:39:57.936969  516753 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 00:39:57.936974  516753 node_conditions.go:123] node cpu capacity is 2
	I0730 00:39:57.936980  516753 node_conditions.go:105] duration metric: took 176.483522ms to run NodePressure ...
	I0730 00:39:57.937021  516753 start.go:241] waiting for startup goroutines ...
	I0730 00:39:57.937052  516753 start.go:255] writing updated cluster config ...
	I0730 00:39:57.937365  516753 ssh_runner.go:195] Run: rm -f paused
	I0730 00:39:57.992477  516753 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0730 00:39:57.994477  516753 out.go:177] * Done! kubectl is now configured to use "ha-161305" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.808083244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300273808062753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08f3739b-c272-4b7e-8696-75050ed3a41f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.809757951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80030cc7-c79e-4ef4-a147-6a9f717b2dbb name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.809827719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80030cc7-c79e-4ef4-a147-6a9f717b2dbb name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.810185342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300002299892483,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857592527052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857553279147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28,PodSandboxId:dc6671f8236d535fcc06ecc8b64532f9509420897f07f373f2dd01e515657966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722299857509877039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722299845777100045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172229984
1990828595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458,PodSandboxId:f2dde65522fc02bfbe2f105b665b84be9121bd505e32068b99282ac44be1a0e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222998247
22757860,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a3f8db9aaefccb9f983dc9e69993dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722299822323078041,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552,PodSandboxId:e8cce281b68018929fa41225cc7f3eb6c9dbacce5a852a94576ec2cb00b0ff5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722299822226679596,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722299822148810432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2,PodSandboxId:22c993ee1124526061090ce669c35d1aa444001554899fa2528adb94105cd632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722299822115721860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80030cc7-c79e-4ef4-a147-6a9f717b2dbb name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.846581043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8053468-edeb-46db-8f8d-c2ffe0e26f02 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.846665643Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8053468-edeb-46db-8f8d-c2ffe0e26f02 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.847868144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c420433a-c8d0-487a-a675-9fe473dd43fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.848350229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300273848327353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c420433a-c8d0-487a-a675-9fe473dd43fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.848861569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fadd418-6660-430b-b2d9-e4e929d1219f name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.848923701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fadd418-6660-430b-b2d9-e4e929d1219f name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.849185005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300002299892483,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857592527052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857553279147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28,PodSandboxId:dc6671f8236d535fcc06ecc8b64532f9509420897f07f373f2dd01e515657966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722299857509877039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722299845777100045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172229984
1990828595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458,PodSandboxId:f2dde65522fc02bfbe2f105b665b84be9121bd505e32068b99282ac44be1a0e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222998247
22757860,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a3f8db9aaefccb9f983dc9e69993dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722299822323078041,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552,PodSandboxId:e8cce281b68018929fa41225cc7f3eb6c9dbacce5a852a94576ec2cb00b0ff5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722299822226679596,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722299822148810432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2,PodSandboxId:22c993ee1124526061090ce669c35d1aa444001554899fa2528adb94105cd632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722299822115721860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fadd418-6660-430b-b2d9-e4e929d1219f name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.884874965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e286223-b97d-4c53-9dcf-81d75eb0786a name=/runtime.v1.RuntimeService/Version
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.885023132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e286223-b97d-4c53-9dcf-81d75eb0786a name=/runtime.v1.RuntimeService/Version
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.890355582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71e32693-48c5-4f78-9318-2f7dfaad1c51 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.890798910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300273890775287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71e32693-48c5-4f78-9318-2f7dfaad1c51 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.891387413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0ddf945-bfff-4c19-874c-9bac8aedb3f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.891451685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0ddf945-bfff-4c19-874c-9bac8aedb3f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.891720513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300002299892483,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857592527052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857553279147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28,PodSandboxId:dc6671f8236d535fcc06ecc8b64532f9509420897f07f373f2dd01e515657966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722299857509877039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722299845777100045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172229984
1990828595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458,PodSandboxId:f2dde65522fc02bfbe2f105b665b84be9121bd505e32068b99282ac44be1a0e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222998247
22757860,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a3f8db9aaefccb9f983dc9e69993dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722299822323078041,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552,PodSandboxId:e8cce281b68018929fa41225cc7f3eb6c9dbacce5a852a94576ec2cb00b0ff5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722299822226679596,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722299822148810432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2,PodSandboxId:22c993ee1124526061090ce669c35d1aa444001554899fa2528adb94105cd632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722299822115721860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0ddf945-bfff-4c19-874c-9bac8aedb3f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.931806961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ea3df29-d287-4800-8f68-d7a74004b79c name=/runtime.v1.RuntimeService/Version
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.931910413Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ea3df29-d287-4800-8f68-d7a74004b79c name=/runtime.v1.RuntimeService/Version
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.933128043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91a2175f-b6bb-4b03-8ef5-e89904598ed9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.933645033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300273933621282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91a2175f-b6bb-4b03-8ef5-e89904598ed9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.934102000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9e15e61-95d7-4fd5-8c62-5f5339af9fae name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.934166403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9e15e61-95d7-4fd5-8c62-5f5339af9fae name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:44:33 ha-161305 crio[686]: time="2024-07-30 00:44:33.934410275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300002299892483,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857592527052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722299857553279147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28,PodSandboxId:dc6671f8236d535fcc06ecc8b64532f9509420897f07f373f2dd01e515657966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722299857509877039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722299845777100045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172229984
1990828595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458,PodSandboxId:f2dde65522fc02bfbe2f105b665b84be9121bd505e32068b99282ac44be1a0e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222998247
22757860,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a3f8db9aaefccb9f983dc9e69993dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722299822323078041,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552,PodSandboxId:e8cce281b68018929fa41225cc7f3eb6c9dbacce5a852a94576ec2cb00b0ff5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722299822226679596,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722299822148810432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2,PodSandboxId:22c993ee1124526061090ce669c35d1aa444001554899fa2528adb94105cd632,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722299822115721860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9e15e61-95d7-4fd5-8c62-5f5339af9fae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	33787e97a5dca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   1ce43d8d3ab67       busybox-fc5497c4f-ttjx8
	2b2f636edadaa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   5d3af1b83b992       coredns-7db6d8ff4d-bdpds
	f6480acdda7d5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   fb1702cc41245       coredns-7db6d8ff4d-mzcln
	922c527ae0dbe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   dc6671f8236d5       storage-provisioner
	625a67c138c38       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   ceb9cb15a729f       kindnet-zrzxf
	1805553d07226       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   5821d52c1a1dd       kube-proxy-wptvn
	3d24c7873d038       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   f2dde65522fc0       kube-vip-ha-161305
	a2084c9181292       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   3f0cef29badb6       etcd-ha-161305
	0555b883473bf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   e8cce281b6801       kube-controller-manager-ha-161305
	16a5f7eb1118e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   cb4dface16b38       kube-scheduler-ha-161305
	c20fcb6fb9f2b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   22c993ee11245       kube-apiserver-ha-161305
	
	
	==> coredns [2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81] <==
	[INFO] 10.244.2.2:60483 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003051093s
	[INFO] 10.244.0.4:59591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146746s
	[INFO] 10.244.0.4:40956 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850341s
	[INFO] 10.244.0.4:35576 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165468s
	[INFO] 10.244.0.4:58077 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0012218s
	[INFO] 10.244.0.4:49078 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000206386s
	[INFO] 10.244.1.2:48352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113505s
	[INFO] 10.244.1.2:37780 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001816793s
	[INFO] 10.244.1.2:33649 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128148s
	[INFO] 10.244.1.2:48051 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092471s
	[INFO] 10.244.1.2:36198 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007191s
	[INFO] 10.244.2.2:35489 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018657s
	[INFO] 10.244.2.2:54354 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142599s
	[INFO] 10.244.2.2:58953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134101s
	[INFO] 10.244.2.2:60956 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078404s
	[INFO] 10.244.0.4:45817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115908s
	[INFO] 10.244.1.2:38448 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117252s
	[INFO] 10.244.1.2:37783 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087436s
	[INFO] 10.244.2.2:44186 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138301s
	[INFO] 10.244.0.4:42700 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074904s
	[INFO] 10.244.0.4:41284 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112024s
	[INFO] 10.244.0.4:39360 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000096229s
	[INFO] 10.244.1.2:35167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095182s
	[INFO] 10.244.1.2:37860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007318s
	[INFO] 10.244.1.2:40179 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076418s
	
	
	==> coredns [f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b] <==
	[INFO] 10.244.2.2:34155 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.007684984s
	[INFO] 10.244.0.4:48164 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.015844953s
	[INFO] 10.244.0.4:37925 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001742548s
	[INFO] 10.244.1.2:60200 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000507695s
	[INFO] 10.244.2.2:54293 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003671077s
	[INFO] 10.244.2.2:59859 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017939s
	[INFO] 10.244.2.2:41789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144993s
	[INFO] 10.244.2.2:46813 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143383s
	[INFO] 10.244.2.2:35590 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107787s
	[INFO] 10.244.0.4:40333 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147444s
	[INFO] 10.244.0.4:41070 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094914s
	[INFO] 10.244.0.4:60015 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119517s
	[INFO] 10.244.1.2:41685 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001405792s
	[INFO] 10.244.1.2:48444 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009825s
	[INFO] 10.244.1.2:38476 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107007s
	[INFO] 10.244.0.4:41768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098341s
	[INFO] 10.244.0.4:54976 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067321s
	[INFO] 10.244.0.4:60391 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053259s
	[INFO] 10.244.1.2:36807 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164322s
	[INFO] 10.244.1.2:38239 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011686s
	[INFO] 10.244.2.2:58831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129058s
	[INFO] 10.244.2.2:56804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134761s
	[INFO] 10.244.2.2:41613 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109006s
	[INFO] 10.244.0.4:60974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155306s
	[INFO] 10.244.1.2:58876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114279s
	
	
	==> describe nodes <==
	Name:               ha-161305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T00_37_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:37:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:44:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:40:10 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:40:10 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:40:10 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:40:10 +0000   Tue, 30 Jul 2024 00:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-161305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee5b503318a04d5fa9f6151b095f43f6
	  System UUID:                ee5b5033-18a0-4d5f-a9f6-151b095f43f6
	  Boot ID:                    c41944eb-218c-41cb-bf89-ac90ba0a8709
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ttjx8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 coredns-7db6d8ff4d-bdpds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m13s
	  kube-system                 coredns-7db6d8ff4d-mzcln             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m13s
	  kube-system                 etcd-ha-161305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m26s
	  kube-system                 kindnet-zrzxf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m13s
	  kube-system                 kube-apiserver-ha-161305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-controller-manager-ha-161305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-proxy-wptvn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-scheduler-ha-161305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-vip-ha-161305                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m11s  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m33s  kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m26s  kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m26s  kubelet          Node ha-161305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m26s  kubelet          Node ha-161305 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m13s  node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal  NodeReady                6m58s  kubelet          Node ha-161305 status is now: NodeReady
	  Normal  RegisteredNode           5m57s  node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal  RegisteredNode           4m46s  node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	
	
	Name:               ha-161305-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_38_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:38:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:41:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 30 Jul 2024 00:40:21 +0000   Tue, 30 Jul 2024 00:41:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 30 Jul 2024 00:40:21 +0000   Tue, 30 Jul 2024 00:41:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 30 Jul 2024 00:40:21 +0000   Tue, 30 Jul 2024 00:41:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 30 Jul 2024 00:40:21 +0000   Tue, 30 Jul 2024 00:41:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    ha-161305-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a157fd7e5c14479d97024c5548311976
	  System UUID:                a157fd7e-5c14-479d-9702-4c5548311976
	  Boot ID:                    a3712653-a4cd-4869-89f3-eb00a1ea7923
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v2pq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-161305-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-dj7v2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-161305-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-161305-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-pqr2f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-161305-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-vip-ha-161305-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s (x8 over 6m15s)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s (x8 over 6m15s)  kubelet          Node ha-161305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s (x7 over 6m15s)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           5m57s                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  NodeNotReady             2m38s                  node-controller  Node ha-161305-m02 status is now: NodeNotReady
	
	
	Name:               ha-161305-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_39_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:39:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:44:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:40:32 +0000   Tue, 30 Jul 2024 00:39:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:40:32 +0000   Tue, 30 Jul 2024 00:39:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:40:32 +0000   Tue, 30 Jul 2024 00:39:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:40:32 +0000   Tue, 30 Jul 2024 00:39:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-161305-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 879cbedc505f4ed1b9b3132464b6d69b
	  System UUID:                879cbedc-505f-4ed1-b9b3-132464b6d69b
	  Boot ID:                    c32c8962-f039-4ee5-9802-63544120ba8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k6rhx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-161305-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m2s
	  kube-system                 kindnet-x7292                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m4s
	  kube-system                 kube-apiserver-ha-161305-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-ha-161305-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-v86sk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-ha-161305-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-vip-ha-161305-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m4s (x8 over 5m4s)  kubelet          Node ha-161305-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s (x8 over 5m4s)  kubelet          Node ha-161305-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s (x7 over 5m4s)  kubelet          Node ha-161305-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	  Normal  RegisteredNode           4m46s                node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	
	
	Name:               ha-161305-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_40_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:44:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:41:06 +0000   Tue, 30 Jul 2024 00:40:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:41:06 +0000   Tue, 30 Jul 2024 00:40:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:41:06 +0000   Tue, 30 Jul 2024 00:40:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:41:06 +0000   Tue, 30 Jul 2024 00:40:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-161305-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16981c9b42447afa5527547ca393cc7
	  System UUID:                b16981c9-b424-47af-a552-7547ca393cc7
	  Boot ID:                    e58479dc-cbf7-4760-8235-442459f77a42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bdl2h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m59s
	  kube-system                 kube-proxy-f9bfb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m59s (x2 over 3m59s)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s (x2 over 3m59s)  kubelet          Node ha-161305-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s (x2 over 3m59s)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal  NodeReady                3m39s                  kubelet          Node ha-161305-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul30 00:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050562] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036109] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.703075] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.709633] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.532343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.201013] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.060589] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060160] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.175750] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.105381] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.262727] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +3.969960] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[Jul30 00:37] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063938] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.953682] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.085875] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.685156] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.526010] kauditd_printk_skb: 38 callbacks suppressed
	[Jul30 00:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb] <==
	{"level":"warn","ts":"2024-07-30T00:44:33.984892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.027524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.12684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.212659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.215094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.219352Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.222596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.225264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.226738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.234665Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.240879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.246332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.250272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.254205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.265218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.277486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.285555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.289526Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.293731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.299613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.308267Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.314448Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.327724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.344659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:44:34.346804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:44:34 up 8 min,  0 users,  load average: 0.20, 0.37, 0.25
	Linux ha-161305 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0] <==
	I0730 00:43:56.760083       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:44:06.764535       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:44:06.764583       1 main.go:299] handling current node
	I0730 00:44:06.764597       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:44:06.764602       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:44:06.764729       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:44:06.764746       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:44:06.764807       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:44:06.764822       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:44:16.765341       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:44:16.765537       1 main.go:299] handling current node
	I0730 00:44:16.765583       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:44:16.765602       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:44:16.765802       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:44:16.765825       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:44:16.765912       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:44:16.765930       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:44:26.758136       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:44:26.758177       1 main.go:299] handling current node
	I0730 00:44:26.758203       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:44:26.758208       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:44:26.758395       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:44:26.758414       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:44:26.758498       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:44:26.758505       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2] <==
	I0730 00:37:07.019277       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0730 00:37:07.025655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.80]
	I0730 00:37:07.026740       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 00:37:07.032606       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0730 00:37:07.224489       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0730 00:37:08.453762       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0730 00:37:08.481298       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0730 00:37:08.492607       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0730 00:37:21.438941       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0730 00:37:21.490268       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0730 00:40:03.588802       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32978: use of closed network connection
	E0730 00:40:03.790532       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32994: use of closed network connection
	E0730 00:40:04.001361       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33010: use of closed network connection
	E0730 00:40:04.196288       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33032: use of closed network connection
	E0730 00:40:04.405598       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33058: use of closed network connection
	E0730 00:40:04.585868       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33074: use of closed network connection
	E0730 00:40:04.756018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33096: use of closed network connection
	E0730 00:40:04.938605       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33126: use of closed network connection
	E0730 00:40:05.127204       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33134: use of closed network connection
	E0730 00:40:05.432569       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33158: use of closed network connection
	E0730 00:40:05.605589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33174: use of closed network connection
	E0730 00:40:05.780589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33198: use of closed network connection
	E0730 00:40:05.955794       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33214: use of closed network connection
	E0730 00:40:06.149844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33240: use of closed network connection
	E0730 00:40:06.322780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33260: use of closed network connection
	
	
	==> kube-controller-manager [0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552] <==
	E0730 00:39:30.241549       1 certificate_controller.go:146] Sync csr-sszsg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-sszsg": the object has been modified; please apply your changes to the latest version and try again
	I0730 00:39:30.337893       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-161305-m03\" does not exist"
	I0730 00:39:30.363379       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-161305-m03" podCIDRs=["10.244.2.0/24"]
	I0730 00:39:31.464649       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-161305-m03"
	I0730 00:39:58.967624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.031261ms"
	I0730 00:39:59.092055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="123.63166ms"
	I0730 00:39:59.296514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="204.312045ms"
	I0730 00:39:59.388523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.947189ms"
	I0730 00:39:59.388810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.333µs"
	I0730 00:39:59.995122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.017µs"
	I0730 00:40:00.271524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.136µs"
	I0730 00:40:02.448890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.141841ms"
	I0730 00:40:02.449079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.945µs"
	I0730 00:40:02.509077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.976048ms"
	I0730 00:40:02.509246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.514µs"
	I0730 00:40:03.166305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.867922ms"
	I0730 00:40:03.167683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.291074ms"
	E0730 00:40:35.439627       1 certificate_controller.go:146] Sync csr-8tbmw failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-8tbmw": the object has been modified; please apply your changes to the latest version and try again
	I0730 00:40:35.709258       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-161305-m04\" does not exist"
	I0730 00:40:35.738280       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-161305-m04" podCIDRs=["10.244.3.0/24"]
	I0730 00:40:36.477101       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-161305-m04"
	I0730 00:40:55.364420       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-161305-m04"
	I0730 00:41:56.519512       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-161305-m04"
	I0730 00:41:56.643688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.500371ms"
	I0730 00:41:56.644361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.618µs"
	
	
	==> kube-proxy [1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2] <==
	I0730 00:37:22.378727       1 server_linux.go:69] "Using iptables proxy"
	I0730 00:37:22.393672       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.80"]
	I0730 00:37:22.514114       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 00:37:22.514175       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 00:37:22.514197       1 server_linux.go:165] "Using iptables Proxier"
	I0730 00:37:22.517669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 00:37:22.518064       1 server.go:872] "Version info" version="v1.30.3"
	I0730 00:37:22.518099       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:37:22.522742       1 config.go:192] "Starting service config controller"
	I0730 00:37:22.523094       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 00:37:22.523149       1 config.go:101] "Starting endpoint slice config controller"
	I0730 00:37:22.523158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 00:37:22.524314       1 config.go:319] "Starting node config controller"
	I0730 00:37:22.524343       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 00:37:22.624532       1 shared_informer.go:320] Caches are synced for node config
	I0730 00:37:22.624613       1 shared_informer.go:320] Caches are synced for service config
	I0730 00:37:22.625083       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12] <==
	I0730 00:39:58.905622       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="aafc31d9-59ed-4484-9345-b2c760317016" pod="default/busybox-fc5497c4f-v2pq7" assumedNode="ha-161305-m02" currentNode="ha-161305-m03"
	E0730 00:39:58.919299       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v2pq7\": pod busybox-fc5497c4f-v2pq7 is already assigned to node \"ha-161305-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-v2pq7" node="ha-161305-m03"
	E0730 00:39:58.920013       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod aafc31d9-59ed-4484-9345-b2c760317016(default/busybox-fc5497c4f-v2pq7) was assumed on ha-161305-m03 but assigned to ha-161305-m02" pod="default/busybox-fc5497c4f-v2pq7"
	E0730 00:39:58.920111       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v2pq7\": pod busybox-fc5497c4f-v2pq7 is already assigned to node \"ha-161305-m02\"" pod="default/busybox-fc5497c4f-v2pq7"
	I0730 00:39:58.920183       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-v2pq7" node="ha-161305-m02"
	E0730 00:39:58.969637       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ttjx8\": pod busybox-fc5497c4f-ttjx8 is already assigned to node \"ha-161305\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-ttjx8" node="ha-161305"
	E0730 00:39:58.969705       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 93297df5-25c9-4722-8f86-668316a3d005(default/busybox-fc5497c4f-ttjx8) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-ttjx8"
	E0730 00:39:58.969726       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ttjx8\": pod busybox-fc5497c4f-ttjx8 is already assigned to node \"ha-161305\"" pod="default/busybox-fc5497c4f-ttjx8"
	I0730 00:39:58.969751       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-ttjx8" node="ha-161305"
	E0730 00:39:58.975773       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-k6rhx\": pod busybox-fc5497c4f-k6rhx is already assigned to node \"ha-161305-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-k6rhx" node="ha-161305-m03"
	E0730 00:39:58.979457       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1c8de485-2ea1-454d-9b0d-aec913ebd0f5(default/busybox-fc5497c4f-k6rhx) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-k6rhx"
	E0730 00:39:58.980146       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-k6rhx\": pod busybox-fc5497c4f-k6rhx is already assigned to node \"ha-161305-m03\"" pod="default/busybox-fc5497c4f-k6rhx"
	I0730 00:39:58.980251       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-k6rhx" node="ha-161305-m03"
	E0730 00:40:35.786316       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-f9bfb\": pod kube-proxy-f9bfb is already assigned to node \"ha-161305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-f9bfb" node="ha-161305-m04"
	E0730 00:40:35.786430       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c223dc56-cf6b-4421-9070-f9b94d291026(kube-system/kube-proxy-f9bfb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-f9bfb"
	E0730 00:40:35.786455       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-f9bfb\": pod kube-proxy-f9bfb is already assigned to node \"ha-161305-m04\"" pod="kube-system/kube-proxy-f9bfb"
	I0730 00:40:35.786482       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-f9bfb" node="ha-161305-m04"
	E0730 00:40:35.793184       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qvmll\": pod kindnet-qvmll is already assigned to node \"ha-161305-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qvmll" node="ha-161305-m04"
	E0730 00:40:35.793336       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 319a869c-bad1-4daa-8ac7-72163167c412(kube-system/kindnet-qvmll) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qvmll"
	E0730 00:40:35.793357       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qvmll\": pod kindnet-qvmll is already assigned to node \"ha-161305-m04\"" pod="kube-system/kindnet-qvmll"
	I0730 00:40:35.793400       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qvmll" node="ha-161305-m04"
	E0730 00:40:35.912231       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pnx2t\": pod kindnet-pnx2t is already assigned to node \"ha-161305-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pnx2t" node="ha-161305-m04"
	E0730 00:40:35.913077       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e12ff04f-f80b-4c33-b030-f515f22d607d(kube-system/kindnet-pnx2t) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pnx2t"
	E0730 00:40:35.913227       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pnx2t\": pod kindnet-pnx2t is already assigned to node \"ha-161305-m04\"" pod="kube-system/kindnet-pnx2t"
	I0730 00:40:35.913334       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pnx2t" node="ha-161305-m04"
	
	
	==> kubelet <==
	Jul 30 00:40:08 ha-161305 kubelet[1372]: E0730 00:40:08.372120    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:40:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:40:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:40:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:40:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:41:08 ha-161305 kubelet[1372]: E0730 00:41:08.373439    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:41:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:41:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:41:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:41:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:42:08 ha-161305 kubelet[1372]: E0730 00:42:08.372435    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:42:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:42:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:42:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:42:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:43:08 ha-161305 kubelet[1372]: E0730 00:43:08.372347    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:43:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:43:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:43:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:43:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:44:08 ha-161305 kubelet[1372]: E0730 00:44:08.374643    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:44:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:44:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:44:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:44:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-161305 -n ha-161305
helpers_test.go:261: (dbg) Run:  kubectl --context ha-161305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-161305 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-161305 -v=7 --alsologtostderr
E0730 00:46:10.080887  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-161305 -v=7 --alsologtostderr: exit status 82 (2m1.797259093s)

                                                
                                                
-- stdout --
	* Stopping node "ha-161305-m04"  ...
	* Stopping node "ha-161305-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:44:35.791137  522604 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:44:35.791284  522604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:35.791302  522604 out.go:304] Setting ErrFile to fd 2...
	I0730 00:44:35.791318  522604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:44:35.791527  522604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:44:35.791745  522604 out.go:298] Setting JSON to false
	I0730 00:44:35.791830  522604 mustload.go:65] Loading cluster: ha-161305
	I0730 00:44:35.792211  522604 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:44:35.792318  522604 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:44:35.792525  522604 mustload.go:65] Loading cluster: ha-161305
	I0730 00:44:35.792750  522604 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:44:35.792801  522604 stop.go:39] StopHost: ha-161305-m04
	I0730 00:44:35.793209  522604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:35.793271  522604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:35.810636  522604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0730 00:44:35.811190  522604 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:35.811861  522604 main.go:141] libmachine: Using API Version  1
	I0730 00:44:35.811891  522604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:35.812253  522604 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:35.814924  522604 out.go:177] * Stopping node "ha-161305-m04"  ...
	I0730 00:44:35.816190  522604 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0730 00:44:35.816252  522604 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:44:35.816527  522604 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0730 00:44:35.816560  522604 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:44:35.819702  522604 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:35.820158  522604 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:40:21 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:44:35.820187  522604 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:44:35.820305  522604 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:44:35.820509  522604 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:44:35.820682  522604 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:44:35.820871  522604 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:44:35.903656  522604 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0730 00:44:35.957376  522604 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0730 00:44:36.010246  522604 main.go:141] libmachine: Stopping "ha-161305-m04"...
	I0730 00:44:36.010281  522604 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:44:36.011786  522604 main.go:141] libmachine: (ha-161305-m04) Calling .Stop
	I0730 00:44:36.015629  522604 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 0/120
	I0730 00:44:37.115656  522604 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:44:37.117279  522604 main.go:141] libmachine: Machine "ha-161305-m04" was stopped.
	I0730 00:44:37.117300  522604 stop.go:75] duration metric: took 1.301115294s to stop
	I0730 00:44:37.117327  522604 stop.go:39] StopHost: ha-161305-m03
	I0730 00:44:37.117629  522604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:44:37.117680  522604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:44:37.133795  522604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0730 00:44:37.134259  522604 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:44:37.134888  522604 main.go:141] libmachine: Using API Version  1
	I0730 00:44:37.134920  522604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:44:37.135289  522604 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:44:37.137464  522604 out.go:177] * Stopping node "ha-161305-m03"  ...
	I0730 00:44:37.138784  522604 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0730 00:44:37.138811  522604 main.go:141] libmachine: (ha-161305-m03) Calling .DriverName
	I0730 00:44:37.139042  522604 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0730 00:44:37.139069  522604 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHHostname
	I0730 00:44:37.141841  522604 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:37.142402  522604 main.go:141] libmachine: (ha-161305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c4:d8", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:38:59 +0000 UTC Type:0 Mac:52:54:00:e7:c4:d8 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-161305-m03 Clientid:01:52:54:00:e7:c4:d8}
	I0730 00:44:37.142442  522604 main.go:141] libmachine: (ha-161305-m03) DBG | domain ha-161305-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:e7:c4:d8 in network mk-ha-161305
	I0730 00:44:37.142598  522604 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHPort
	I0730 00:44:37.142791  522604 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHKeyPath
	I0730 00:44:37.142937  522604 main.go:141] libmachine: (ha-161305-m03) Calling .GetSSHUsername
	I0730 00:44:37.143104  522604 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m03/id_rsa Username:docker}
	I0730 00:44:37.228215  522604 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0730 00:44:37.281151  522604 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0730 00:44:37.334301  522604 main.go:141] libmachine: Stopping "ha-161305-m03"...
	I0730 00:44:37.334337  522604 main.go:141] libmachine: (ha-161305-m03) Calling .GetState
	I0730 00:44:37.335893  522604 main.go:141] libmachine: (ha-161305-m03) Calling .Stop
	I0730 00:44:37.339236  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 0/120
	I0730 00:44:38.340483  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 1/120
	I0730 00:44:39.342056  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 2/120
	I0730 00:44:40.343759  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 3/120
	I0730 00:44:41.345847  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 4/120
	I0730 00:44:42.347406  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 5/120
	I0730 00:44:43.349516  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 6/120
	I0730 00:44:44.351057  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 7/120
	I0730 00:44:45.352516  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 8/120
	I0730 00:44:46.354092  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 9/120
	I0730 00:44:47.355883  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 10/120
	I0730 00:44:48.357692  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 11/120
	I0730 00:44:49.359156  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 12/120
	I0730 00:44:50.360802  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 13/120
	I0730 00:44:51.362347  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 14/120
	I0730 00:44:52.364472  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 15/120
	I0730 00:44:53.365873  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 16/120
	I0730 00:44:54.367365  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 17/120
	I0730 00:44:55.368659  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 18/120
	I0730 00:44:56.370083  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 19/120
	I0730 00:44:57.372317  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 20/120
	I0730 00:44:58.374177  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 21/120
	I0730 00:44:59.375650  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 22/120
	I0730 00:45:00.377416  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 23/120
	I0730 00:45:01.378918  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 24/120
	I0730 00:45:02.381276  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 25/120
	I0730 00:45:03.383191  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 26/120
	I0730 00:45:04.384675  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 27/120
	I0730 00:45:05.386678  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 28/120
	I0730 00:45:06.388235  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 29/120
	I0730 00:45:07.390536  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 30/120
	I0730 00:45:08.392569  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 31/120
	I0730 00:45:09.394911  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 32/120
	I0730 00:45:10.397373  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 33/120
	I0730 00:45:11.398529  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 34/120
	I0730 00:45:12.400234  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 35/120
	I0730 00:45:13.401667  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 36/120
	I0730 00:45:14.403109  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 37/120
	I0730 00:45:15.404359  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 38/120
	I0730 00:45:16.405849  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 39/120
	I0730 00:45:17.407843  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 40/120
	I0730 00:45:18.409296  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 41/120
	I0730 00:45:19.410636  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 42/120
	I0730 00:45:20.412232  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 43/120
	I0730 00:45:21.414113  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 44/120
	I0730 00:45:22.415979  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 45/120
	I0730 00:45:23.418376  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 46/120
	I0730 00:45:24.419728  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 47/120
	I0730 00:45:25.421105  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 48/120
	I0730 00:45:26.423131  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 49/120
	I0730 00:45:27.424975  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 50/120
	I0730 00:45:28.426522  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 51/120
	I0730 00:45:29.428346  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 52/120
	I0730 00:45:30.430355  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 53/120
	I0730 00:45:31.431684  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 54/120
	I0730 00:45:32.433742  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 55/120
	I0730 00:45:33.435136  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 56/120
	I0730 00:45:34.436719  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 57/120
	I0730 00:45:35.437923  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 58/120
	I0730 00:45:36.439420  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 59/120
	I0730 00:45:37.441311  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 60/120
	I0730 00:45:38.442646  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 61/120
	I0730 00:45:39.444039  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 62/120
	I0730 00:45:40.445616  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 63/120
	I0730 00:45:41.446992  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 64/120
	I0730 00:45:42.448780  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 65/120
	I0730 00:45:43.450374  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 66/120
	I0730 00:45:44.451723  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 67/120
	I0730 00:45:45.453190  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 68/120
	I0730 00:45:46.455272  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 69/120
	I0730 00:45:47.457130  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 70/120
	I0730 00:45:48.458737  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 71/120
	I0730 00:45:49.460093  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 72/120
	I0730 00:45:50.461534  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 73/120
	I0730 00:45:51.463288  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 74/120
	I0730 00:45:52.464809  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 75/120
	I0730 00:45:53.466088  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 76/120
	I0730 00:45:54.467498  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 77/120
	I0730 00:45:55.468995  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 78/120
	I0730 00:45:56.470469  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 79/120
	I0730 00:45:57.472539  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 80/120
	I0730 00:45:58.473868  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 81/120
	I0730 00:45:59.475363  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 82/120
	I0730 00:46:00.476905  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 83/120
	I0730 00:46:01.478815  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 84/120
	I0730 00:46:02.480776  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 85/120
	I0730 00:46:03.482325  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 86/120
	I0730 00:46:04.483661  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 87/120
	I0730 00:46:05.486167  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 88/120
	I0730 00:46:06.487576  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 89/120
	I0730 00:46:07.489325  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 90/120
	I0730 00:46:08.490623  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 91/120
	I0730 00:46:09.491992  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 92/120
	I0730 00:46:10.493449  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 93/120
	I0730 00:46:11.494786  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 94/120
	I0730 00:46:12.496550  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 95/120
	I0730 00:46:13.498019  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 96/120
	I0730 00:46:14.499618  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 97/120
	I0730 00:46:15.501332  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 98/120
	I0730 00:46:16.502763  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 99/120
	I0730 00:46:17.504595  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 100/120
	I0730 00:46:18.506056  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 101/120
	I0730 00:46:19.507536  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 102/120
	I0730 00:46:20.509090  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 103/120
	I0730 00:46:21.510470  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 104/120
	I0730 00:46:22.512198  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 105/120
	I0730 00:46:23.513622  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 106/120
	I0730 00:46:24.515093  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 107/120
	I0730 00:46:25.516791  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 108/120
	I0730 00:46:26.518267  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 109/120
	I0730 00:46:27.519617  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 110/120
	I0730 00:46:28.521097  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 111/120
	I0730 00:46:29.522695  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 112/120
	I0730 00:46:30.523997  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 113/120
	I0730 00:46:31.525521  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 114/120
	I0730 00:46:32.527506  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 115/120
	I0730 00:46:33.528899  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 116/120
	I0730 00:46:34.530289  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 117/120
	I0730 00:46:35.531921  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 118/120
	I0730 00:46:36.533329  522604 main.go:141] libmachine: (ha-161305-m03) Waiting for machine to stop 119/120
	I0730 00:46:37.534182  522604 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0730 00:46:37.534262  522604 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0730 00:46:37.536411  522604 out.go:177] 
	W0730 00:46:37.537897  522604 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0730 00:46:37.537921  522604 out.go:239] * 
	* 
	W0730 00:46:37.541210  522604 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0730 00:46:37.542714  522604 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-161305 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-161305 --wait=true -v=7 --alsologtostderr
E0730 00:46:37.767922  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:48:42.934751  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-161305 --wait=true -v=7 --alsologtostderr: (4m5.123676395s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-161305
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-161305 -n ha-161305
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-161305 logs -n 25: (1.764527461s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m02:/home/docker/cp-test_ha-161305-m03_ha-161305-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m02 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m04 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp testdata/cp-test.txt                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305:/home/docker/cp-test_ha-161305-m04_ha-161305.txt                       |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305 sudo cat                                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305.txt                                 |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m02:/home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m02 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03:/home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m03 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-161305 node stop m02 -v=7                                                     | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-161305 node start m02 -v=7                                                    | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-161305 -v=7                                                           | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-161305 -v=7                                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-161305 --wait=true -v=7                                                    | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:46 UTC | 30 Jul 24 00:50 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-161305                                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:50 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:46:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:46:37.590312  523084 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:46:37.590475  523084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:46:37.590485  523084 out.go:304] Setting ErrFile to fd 2...
	I0730 00:46:37.590491  523084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:46:37.590681  523084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:46:37.591316  523084 out.go:298] Setting JSON to false
	I0730 00:46:37.592426  523084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8940,"bootTime":1722291458,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:46:37.592488  523084 start.go:139] virtualization: kvm guest
	I0730 00:46:37.595766  523084 out.go:177] * [ha-161305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:46:37.597262  523084 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:46:37.597279  523084 notify.go:220] Checking for updates...
	I0730 00:46:37.599712  523084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:46:37.600963  523084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:46:37.602222  523084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:46:37.603543  523084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:46:37.604753  523084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:46:37.606568  523084 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:46:37.606731  523084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:46:37.607401  523084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:46:37.607491  523084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:46:37.622902  523084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0730 00:46:37.623409  523084 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:46:37.624003  523084 main.go:141] libmachine: Using API Version  1
	I0730 00:46:37.624027  523084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:46:37.624437  523084 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:46:37.624775  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:46:37.660457  523084 out.go:177] * Using the kvm2 driver based on existing profile
	I0730 00:46:37.661860  523084 start.go:297] selected driver: kvm2
	I0730 00:46:37.661887  523084 start.go:901] validating driver "kvm2" against &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:46:37.662349  523084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:46:37.662657  523084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:46:37.662725  523084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:46:37.679193  523084 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:46:37.679878  523084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:46:37.679945  523084 cni.go:84] Creating CNI manager for ""
	I0730 00:46:37.679956  523084 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0730 00:46:37.680024  523084 start.go:340] cluster config:
	{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:46:37.680151  523084 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:46:37.682084  523084 out.go:177] * Starting "ha-161305" primary control-plane node in "ha-161305" cluster
	I0730 00:46:37.683444  523084 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:46:37.683490  523084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:46:37.683500  523084 cache.go:56] Caching tarball of preloaded images
	I0730 00:46:37.683576  523084 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:46:37.683586  523084 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:46:37.683701  523084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:46:37.683893  523084 start.go:360] acquireMachinesLock for ha-161305: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:46:37.683954  523084 start.go:364] duration metric: took 41.973µs to acquireMachinesLock for "ha-161305"
	I0730 00:46:37.683972  523084 start.go:96] Skipping create...Using existing machine configuration
	I0730 00:46:37.683985  523084 fix.go:54] fixHost starting: 
	I0730 00:46:37.684230  523084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:46:37.684261  523084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:46:37.698933  523084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0730 00:46:37.699395  523084 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:46:37.699933  523084 main.go:141] libmachine: Using API Version  1
	I0730 00:46:37.699956  523084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:46:37.700310  523084 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:46:37.700480  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:46:37.700600  523084 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:46:37.702277  523084 fix.go:112] recreateIfNeeded on ha-161305: state=Running err=<nil>
	W0730 00:46:37.702301  523084 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 00:46:37.704994  523084 out.go:177] * Updating the running kvm2 "ha-161305" VM ...
	I0730 00:46:37.706355  523084 machine.go:94] provisionDockerMachine start ...
	I0730 00:46:37.706380  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:46:37.706588  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:37.709124  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.709617  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:37.709647  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.709758  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:37.709948  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.710115  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.710241  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:37.710427  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:46:37.710632  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:46:37.710646  523084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 00:46:37.834233  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:46:37.834266  523084 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:46:37.834533  523084 buildroot.go:166] provisioning hostname "ha-161305"
	I0730 00:46:37.834559  523084 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:46:37.834781  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:37.837773  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.838226  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:37.838251  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.838495  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:37.838701  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.838866  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.839056  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:37.839226  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:46:37.839452  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:46:37.839473  523084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305 && echo "ha-161305" | sudo tee /etc/hostname
	I0730 00:46:37.967641  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:46:37.967682  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:37.972231  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.972564  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:37.972597  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.972814  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:37.973072  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.973250  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.973435  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:37.973605  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:46:37.973818  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:46:37.973844  523084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:46:38.089867  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:46:38.089905  523084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:46:38.089930  523084 buildroot.go:174] setting up certificates
	I0730 00:46:38.089939  523084 provision.go:84] configureAuth start
	I0730 00:46:38.089947  523084 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:46:38.090262  523084 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:46:38.092973  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.093384  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:38.093417  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.093596  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:38.096434  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.096818  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:38.096845  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.096993  523084 provision.go:143] copyHostCerts
	I0730 00:46:38.097031  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:46:38.097082  523084 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:46:38.097096  523084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:46:38.097161  523084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:46:38.097244  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:46:38.097262  523084 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:46:38.097269  523084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:46:38.097293  523084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:46:38.097338  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:46:38.097355  523084 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:46:38.097361  523084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:46:38.097382  523084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:46:38.097443  523084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305 san=[127.0.0.1 192.168.39.80 ha-161305 localhost minikube]
	I0730 00:46:38.242386  523084 provision.go:177] copyRemoteCerts
	I0730 00:46:38.242461  523084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:46:38.242495  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:38.245213  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.245557  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:38.245586  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.245747  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:38.245935  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:38.246136  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:38.246294  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:46:38.334655  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:46:38.334749  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0730 00:46:38.359794  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:46:38.359895  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:46:38.382530  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:46:38.382601  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:46:38.408356  523084 provision.go:87] duration metric: took 318.40262ms to configureAuth
	I0730 00:46:38.408391  523084 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:46:38.408655  523084 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:46:38.408761  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:38.411371  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.411701  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:38.411723  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.411920  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:38.412127  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:38.412362  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:38.412505  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:38.412686  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:46:38.412918  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:46:38.412944  523084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:48:09.211747  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:48:09.211787  523084 machine.go:97] duration metric: took 1m31.505412701s to provisionDockerMachine
	I0730 00:48:09.211807  523084 start.go:293] postStartSetup for "ha-161305" (driver="kvm2")
	I0730 00:48:09.211825  523084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:48:09.211878  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.212262  523084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:48:09.212295  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.215280  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.215672  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.215702  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.215812  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.215994  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.216174  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.216309  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:48:09.304489  523084 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:48:09.308599  523084 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:48:09.308641  523084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:48:09.308731  523084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:48:09.308814  523084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:48:09.308827  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:48:09.308910  523084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:48:09.317614  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:48:09.340221  523084 start.go:296] duration metric: took 128.39793ms for postStartSetup
	I0730 00:48:09.340270  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.340605  523084 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0730 00:48:09.340634  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.343109  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.343503  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.343530  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.343710  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.343915  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.344064  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.344220  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	W0730 00:48:09.431138  523084 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0730 00:48:09.431174  523084 fix.go:56] duration metric: took 1m31.747193892s for fixHost
	I0730 00:48:09.431212  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.433724  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.434081  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.434110  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.434264  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.434447  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.434621  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.434704  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.434834  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:48:09.435046  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:48:09.435059  523084 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 00:48:09.545337  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722300489.507743472
	
	I0730 00:48:09.545359  523084 fix.go:216] guest clock: 1722300489.507743472
	I0730 00:48:09.545367  523084 fix.go:229] Guest: 2024-07-30 00:48:09.507743472 +0000 UTC Remote: 2024-07-30 00:48:09.431181664 +0000 UTC m=+91.877567347 (delta=76.561808ms)
	I0730 00:48:09.545386  523084 fix.go:200] guest clock delta is within tolerance: 76.561808ms
	I0730 00:48:09.545392  523084 start.go:83] releasing machines lock for "ha-161305", held for 1m31.861425818s
	I0730 00:48:09.545436  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.545676  523084 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:48:09.548265  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.548619  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.548643  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.548836  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.549379  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.549566  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.549664  523084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:48:09.549726  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.549756  523084 ssh_runner.go:195] Run: cat /version.json
	I0730 00:48:09.549783  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.552147  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.552465  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.552548  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.552570  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.552695  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.552849  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.552868  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.552870  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.553032  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.553065  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.553191  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.553179  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:48:09.553362  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.553508  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:48:09.665925  523084 ssh_runner.go:195] Run: systemctl --version
	I0730 00:48:09.671881  523084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:48:09.834017  523084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:48:09.847016  523084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:48:09.847092  523084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:48:09.855718  523084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 00:48:09.855753  523084 start.go:495] detecting cgroup driver to use...
	I0730 00:48:09.855836  523084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:48:09.871170  523084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:48:09.885574  523084 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:48:09.885645  523084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:48:09.899356  523084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:48:09.912895  523084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:48:10.058035  523084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:48:10.201818  523084 docker.go:233] disabling docker service ...
	I0730 00:48:10.201892  523084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:48:10.217976  523084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:48:10.231647  523084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:48:10.376508  523084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:48:10.521729  523084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:48:10.535726  523084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:48:10.553426  523084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:48:10.553495  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.563277  523084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:48:10.563353  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.573106  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.582679  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.592273  523084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:48:10.602125  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.611806  523084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.622038  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.631437  523084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:48:10.639954  523084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:48:10.648234  523084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:48:10.792016  523084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 00:48:19.388392  523084 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.596329469s)
	I0730 00:48:19.388426  523084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:48:19.388485  523084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:48:19.393268  523084 start.go:563] Will wait 60s for crictl version
	I0730 00:48:19.393340  523084 ssh_runner.go:195] Run: which crictl
	I0730 00:48:19.396948  523084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:48:19.437444  523084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:48:19.437556  523084 ssh_runner.go:195] Run: crio --version
	I0730 00:48:19.466451  523084 ssh_runner.go:195] Run: crio --version
	I0730 00:48:19.495176  523084 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:48:19.496455  523084 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:48:19.499397  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:19.499744  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:19.499773  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:19.499951  523084 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:48:19.504529  523084 kubeadm.go:883] updating cluster {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 00:48:19.504688  523084 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:48:19.504761  523084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:48:19.547027  523084 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:48:19.547049  523084 crio.go:433] Images already preloaded, skipping extraction
	I0730 00:48:19.547109  523084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:48:19.579733  523084 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:48:19.579757  523084 cache_images.go:84] Images are preloaded, skipping loading
	I0730 00:48:19.579767  523084 kubeadm.go:934] updating node { 192.168.39.80 8443 v1.30.3 crio true true} ...
	I0730 00:48:19.579877  523084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:48:19.579940  523084 ssh_runner.go:195] Run: crio config
	I0730 00:48:19.628868  523084 cni.go:84] Creating CNI manager for ""
	I0730 00:48:19.628887  523084 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0730 00:48:19.628896  523084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 00:48:19.628918  523084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-161305 NodeName:ha-161305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 00:48:19.629149  523084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-161305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 00:48:19.629173  523084 kube-vip.go:115] generating kube-vip config ...
	I0730 00:48:19.629232  523084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:48:19.640609  523084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:48:19.640741  523084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0730 00:48:19.640802  523084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:48:19.650060  523084 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 00:48:19.650149  523084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0730 00:48:19.658991  523084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0730 00:48:19.674738  523084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:48:19.689881  523084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0730 00:48:19.705084  523084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0730 00:48:19.722523  523084 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:48:19.726227  523084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:48:19.869398  523084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:48:19.883718  523084 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.80
	I0730 00:48:19.883744  523084 certs.go:194] generating shared ca certs ...
	I0730 00:48:19.883770  523084 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:48:19.883969  523084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:48:19.884064  523084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:48:19.884092  523084 certs.go:256] generating profile certs ...
	I0730 00:48:19.884193  523084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:48:19.884234  523084 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.6d2de208
	I0730 00:48:19.884256  523084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.6d2de208 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.126 192.168.39.23 192.168.39.254]
	I0730 00:48:20.095553  523084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.6d2de208 ...
	I0730 00:48:20.095583  523084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.6d2de208: {Name:mka5a7d713a84be5a244cfd9bca850e3421af976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:48:20.095751  523084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.6d2de208 ...
	I0730 00:48:20.095766  523084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.6d2de208: {Name:mk8bd8d9a97bc0f3d72fcacd0dc6794358fcd73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:48:20.095838  523084 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.6d2de208 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:48:20.096003  523084 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.6d2de208 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
	I0730 00:48:20.096150  523084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:48:20.096167  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:48:20.096181  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:48:20.096198  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:48:20.096211  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:48:20.096223  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:48:20.096235  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:48:20.096252  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:48:20.096264  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:48:20.096312  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:48:20.096339  523084 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:48:20.096350  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:48:20.096374  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:48:20.096395  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:48:20.096417  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:48:20.096454  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:48:20.096480  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:48:20.096496  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:48:20.096508  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:48:20.097243  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:48:20.121509  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:48:20.144334  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:48:20.167901  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:48:20.191485  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0730 00:48:20.213496  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 00:48:20.235569  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:48:20.258616  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:48:20.281395  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:48:20.304276  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:48:20.326844  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:48:20.349171  523084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 00:48:20.365089  523084 ssh_runner.go:195] Run: openssl version
	I0730 00:48:20.370768  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:48:20.381669  523084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:48:20.385957  523084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:48:20.386020  523084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:48:20.391428  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:48:20.400319  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:48:20.410812  523084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:48:20.415046  523084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:48:20.415101  523084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:48:20.420315  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 00:48:20.429199  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:48:20.439417  523084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:48:20.443496  523084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:48:20.443552  523084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:48:20.448845  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:48:20.457967  523084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:48:20.462378  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 00:48:20.467557  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 00:48:20.472669  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 00:48:20.478358  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 00:48:20.483672  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 00:48:20.488624  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0730 00:48:20.493735  523084 kubeadm.go:392] StartCluster: {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:48:20.493885  523084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 00:48:20.493956  523084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 00:48:20.529201  523084 cri.go:89] found id: "febe530e8cd22403160bb777a5267b14031496dcfd51e5ea49161e00e10b9a02"
	I0730 00:48:20.529226  523084 cri.go:89] found id: "cd75198115c64db74cb8fb79c24b6c0ddb58caaa9bdbd571d858c68d1492e34b"
	I0730 00:48:20.529230  523084 cri.go:89] found id: "05222e14df442628b1f405e4a28c1aa205a2a26a2895a63719aa2d3d3caaa86e"
	I0730 00:48:20.529235  523084 cri.go:89] found id: "2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81"
	I0730 00:48:20.529239  523084 cri.go:89] found id: "f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b"
	I0730 00:48:20.529243  523084 cri.go:89] found id: "922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28"
	I0730 00:48:20.529248  523084 cri.go:89] found id: "625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0"
	I0730 00:48:20.529252  523084 cri.go:89] found id: "1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2"
	I0730 00:48:20.529255  523084 cri.go:89] found id: "3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458"
	I0730 00:48:20.529263  523084 cri.go:89] found id: "a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb"
	I0730 00:48:20.529282  523084 cri.go:89] found id: "0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552"
	I0730 00:48:20.529287  523084 cri.go:89] found id: "16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12"
	I0730 00:48:20.529291  523084 cri.go:89] found id: "c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2"
	I0730 00:48:20.529295  523084 cri.go:89] found id: ""
	I0730 00:48:20.529357  523084 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.405517380Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-ttjx8,Uid:93297df5-25c9-4722-8f86-668316a3d005,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722300539485164386,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:39:58.942722073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-161305,Uid:a98cc2f4e3fa5d2b9b450a9e8e1bc531,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722300519988709856,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{kubernetes.io/config.hash: a98cc2f4e3fa5d2b9b450a9e8e1bc531,kubernetes.io/config.seen: 2024-07-30T00:48:19.683552430Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-mzcln,Uid:cab12f67-38e0-41f7-8414-120064dca1e6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722300505887834556,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-30T00:37:37.015651967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&PodSandboxMetadata{Name:kube-proxy-wptvn,Uid:1733d06b-6eb7-4dd5-9349-b727cc05e797,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722300505875896863,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:37:21.462567707Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdpds,Uid:7c1470c5-85f4-4dfa-84c0-14aa6c713e73,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722300505845511496
,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:37:37.021071151Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&PodSandboxMetadata{Name:kindnet-zrzxf,Uid:3745faa8-044d-4923-8a49-c21a0332e208,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722300505844784017,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-
30T00:37:21.469110691Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-161305,Uid:1d18c18869abbb97793407467ebdef17,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722300505826145598,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1d18c18869abbb97793407467ebdef17,kubernetes.io/config.seen: 2024-07-30T00:37:08.331020687Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-161305,Uid:e78fc87ed9d024ac0fe2effd95cda2d8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1722300505793392036,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.80:8443,kubernetes.io/config.hash: e78fc87ed9d024ac0fe2effd95cda2d8,kubernetes.io/config.seen: 2024-07-30T00:37:08.331011142Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:75260b22-5ffc-4848-8c70-5b9cb3f010bf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722300505789045650,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-30T00:37:37.008702062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Met
adata:&PodSandboxMetadata{Name:etcd-ha-161305,Uid:dbd41dd340ce6d6e863fbe359a241ea1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722300505762096489,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.80:2379,kubernetes.io/config.hash: dbd41dd340ce6d6e863fbe359a241ea1,kubernetes.io/config.seen: 2024-07-30T00:37:08.331006218Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-161305,Uid:139678a0c09914387156e9653bed8a57,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722300505733568144,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.co
ntainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 139678a0c09914387156e9653bed8a57,kubernetes.io/config.seen: 2024-07-30T00:37:08.331019288Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-ttjx8,Uid:93297df5-25c9-4722-8f86-668316a3d005,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722299999262405792,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:39:58.942722073Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdpds,Uid:7c1470c5-85f4-4dfa-84c0-14aa6c713e73,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722299857339646347,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:37:37.021071151Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-mzcln,Uid:cab12f67-38e0-41f7-8414-120064dca1e6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722299857325424579,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:37:37.015651967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&PodSandboxMetadata{Name:kube-proxy-wptvn,Uid:1733d06b-6eb7-4dd5-9349-b727cc05e797,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722299841789082541,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:37:21.462567707Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Po
dSandbox{Id:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&PodSandboxMetadata{Name:kindnet-zrzxf,Uid:3745faa8-044d-4923-8a49-c21a0332e208,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722299841778353462,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T00:37:21.469110691Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&PodSandboxMetadata{Name:etcd-ha-161305,Uid:dbd41dd340ce6d6e863fbe359a241ea1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722299821970446891,Labels:map[string]string{component: etcd,io.kubernetes.container.name: P
OD,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.80:2379,kubernetes.io/config.hash: dbd41dd340ce6d6e863fbe359a241ea1,kubernetes.io/config.seen: 2024-07-30T00:37:01.491843270Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-161305,Uid:1d18c18869abbb97793407467ebdef17,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722299821963486018,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1d18c18869a
bbb97793407467ebdef17,kubernetes.io/config.seen: 2024-07-30T00:37:01.491841028Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f62c824c-6c49-411a-9bd3-f5b1f191d7a1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.406819180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90222bd7-7574-4ad3-961b-0249edc3695d name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.406897071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90222bd7-7574-4ad3-961b-0249edc3695d name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.407611563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722300587368902091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722300548417942569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300539617070118,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722300537988526839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722300533365432790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300520078893576,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300506677821824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300506475463520,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c0
4aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506533828163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506420804254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300506356450675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300506228530979,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300506248244840,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300506155627860,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300002300280990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857592859588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857553339585,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722299845777144838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299841990836556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299822323178898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722299822148886240,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90222bd7-7574-4ad3-961b-0249edc3695d name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.433929187Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25ac2324-49ca-4b9e-88e7-5a7b34972c47 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.434106782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25ac2324-49ca-4b9e-88e7-5a7b34972c47 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.435287480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa2e1045-ce47-4926-a1fa-8ed59cd42d1c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.436276185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300643436219868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa2e1045-ce47-4926-a1fa-8ed59cd42d1c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.436949042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84db4b90-a657-44d4-9ba9-0e6540170044 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.437165460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84db4b90-a657-44d4-9ba9-0e6540170044 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.441300916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722300587368902091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722300548417942569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300539617070118,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722300537988526839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722300533365432790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300520078893576,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300506677821824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300506475463520,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c0
4aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506533828163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506420804254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300506356450675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300506228530979,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300506248244840,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300506155627860,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300002300280990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857592859588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857553339585,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722299845777144838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299841990836556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299822323178898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722299822148886240,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84db4b90-a657-44d4-9ba9-0e6540170044 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.485262375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba7ad446-1ff4-4cb6-bf01-9f86cc9f652a name=/runtime.v1.RuntimeService/Version
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.485340366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba7ad446-1ff4-4cb6-bf01-9f86cc9f652a name=/runtime.v1.RuntimeService/Version
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.486653333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14246351-9dfc-4ddf-b163-eaf71347ea26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.487390760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300643487361982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14246351-9dfc-4ddf-b163-eaf71347ea26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.488052683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fa3af02-0606-4d81-a3ef-b64f5bbcd675 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.488157325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fa3af02-0606-4d81-a3ef-b64f5bbcd675 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.488874689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722300587368902091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722300548417942569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300539617070118,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722300537988526839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722300533365432790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300520078893576,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300506677821824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300506475463520,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c0
4aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506533828163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506420804254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300506356450675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300506228530979,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300506248244840,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300506155627860,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300002300280990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857592859588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857553339585,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722299845777144838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299841990836556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299822323178898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722299822148886240,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fa3af02-0606-4d81-a3ef-b64f5bbcd675 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.530721052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f753f50-1bb9-4b51-ad1e-2efdf77ee781 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.530829791Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f753f50-1bb9-4b51-ad1e-2efdf77ee781 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.532118341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f4ec465-ce34-487e-826b-9d02dcbe8701 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.532676187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300643532651470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f4ec465-ce34-487e-826b-9d02dcbe8701 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.533565350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb2eb7ef-5d66-489f-976b-79d3c3443ec5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.533639703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb2eb7ef-5d66-489f-976b-79d3c3443ec5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:50:43 ha-161305 crio[3748]: time="2024-07-30 00:50:43.534180616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722300587368902091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722300548417942569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300539617070118,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722300537988526839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722300533365432790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300520078893576,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300506677821824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300506475463520,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c0
4aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506533828163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506420804254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300506356450675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300506228530979,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300506248244840,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300506155627860,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300002300280990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857592859588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857553339585,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722299845777144838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299841990836556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299822323178898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722299822148886240,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb2eb7ef-5d66-489f-976b-79d3c3443ec5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	571f739c3ec7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Running             storage-provisioner       4                   0377dfc5f5117       storage-provisioner
	3034a674ef2bd       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   d09f7c2c32def       kube-controller-manager-ha-161305
	37637e74a1f33       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   45a56eb6f8ca1       busybox-fc5497c4f-ttjx8
	beb8a63139cdb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   e9c7b84c6c909       kube-apiserver-ha-161305
	dbeddb236c6c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   0377dfc5f5117       storage-provisioner
	eca65a5f97abc       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   f2cde2eb18016       kube-vip-ha-161305
	3794d8da6d031       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   62603cd489d83       kube-proxy-wptvn
	225f65c04aecc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   a7a7848979d5d       coredns-7db6d8ff4d-mzcln
	a4940cda3f54a       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   3452972572a3b       kindnet-zrzxf
	e7edc1afdc01a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   14b01800078de       coredns-7db6d8ff4d-bdpds
	3ab677666e42b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   5937bdc3a20dc       kube-scheduler-ha-161305
	090db2af84793       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   9818b8693e1bc       etcd-ha-161305
	e11b91a20a338       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   e9c7b84c6c909       kube-apiserver-ha-161305
	3b13100aa8cf3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   d09f7c2c32def       kube-controller-manager-ha-161305
	33787e97a5dca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   1ce43d8d3ab67       busybox-fc5497c4f-ttjx8
	2b2f636edadaa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   5d3af1b83b992       coredns-7db6d8ff4d-bdpds
	f6480acdda7d5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   fb1702cc41245       coredns-7db6d8ff4d-mzcln
	625a67c138c38       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   ceb9cb15a729f       kindnet-zrzxf
	1805553d07226       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   5821d52c1a1dd       kube-proxy-wptvn
	a2084c9181292       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   3f0cef29badb6       etcd-ha-161305
	16a5f7eb1118e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   cb4dface16b38       kube-scheduler-ha-161305
	
	
	==> coredns [225f65c04aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57474->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2075708608]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 00:48:38.336) (total time: 10441ms):
	Trace[2075708608]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57474->10.96.0.1:443: read: connection reset by peer 10441ms (00:48:48.777)
	Trace[2075708608]: [10.4414144s] [10.4414144s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57474->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81] <==
	[INFO] 10.244.0.4:49078 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000206386s
	[INFO] 10.244.1.2:48352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113505s
	[INFO] 10.244.1.2:37780 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001816793s
	[INFO] 10.244.1.2:33649 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128148s
	[INFO] 10.244.1.2:48051 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092471s
	[INFO] 10.244.1.2:36198 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007191s
	[INFO] 10.244.2.2:35489 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018657s
	[INFO] 10.244.2.2:54354 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142599s
	[INFO] 10.244.2.2:58953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134101s
	[INFO] 10.244.2.2:60956 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078404s
	[INFO] 10.244.0.4:45817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115908s
	[INFO] 10.244.1.2:38448 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117252s
	[INFO] 10.244.1.2:37783 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087436s
	[INFO] 10.244.2.2:44186 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138301s
	[INFO] 10.244.0.4:42700 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074904s
	[INFO] 10.244.0.4:41284 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112024s
	[INFO] 10.244.0.4:39360 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000096229s
	[INFO] 10.244.1.2:35167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095182s
	[INFO] 10.244.1.2:37860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007318s
	[INFO] 10.244.1.2:40179 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076418s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b] <==
	Trace[363621454]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:48:41.236)
	Trace[363621454]: [10.001813929s] [10.001813929s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[809198444]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 00:48:38.223) (total time: 13405ms):
	Trace[809198444]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47084->10.96.0.1:443: read: connection reset by peer 13405ms (00:48:51.628)
	Trace[809198444]: [13.405604121s] [13.405604121s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47108->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47108->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b] <==
	[INFO] 10.244.2.2:59859 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017939s
	[INFO] 10.244.2.2:41789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144993s
	[INFO] 10.244.2.2:46813 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143383s
	[INFO] 10.244.2.2:35590 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107787s
	[INFO] 10.244.0.4:40333 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147444s
	[INFO] 10.244.0.4:41070 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094914s
	[INFO] 10.244.0.4:60015 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119517s
	[INFO] 10.244.1.2:41685 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001405792s
	[INFO] 10.244.1.2:48444 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009825s
	[INFO] 10.244.1.2:38476 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107007s
	[INFO] 10.244.0.4:41768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098341s
	[INFO] 10.244.0.4:54976 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067321s
	[INFO] 10.244.0.4:60391 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053259s
	[INFO] 10.244.1.2:36807 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164322s
	[INFO] 10.244.1.2:38239 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011686s
	[INFO] 10.244.2.2:58831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129058s
	[INFO] 10.244.2.2:56804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134761s
	[INFO] 10.244.2.2:41613 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109006s
	[INFO] 10.244.0.4:60974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155306s
	[INFO] 10.244.1.2:58876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114279s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-161305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T00_37_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:37:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:50:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:49:07 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:49:07 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:49:07 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:49:07 +0000   Tue, 30 Jul 2024 00:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-161305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee5b503318a04d5fa9f6151b095f43f6
	  System UUID:                ee5b5033-18a0-4d5f-a9f6-151b095f43f6
	  Boot ID:                    c41944eb-218c-41cb-bf89-ac90ba0a8709
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ttjx8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-bdpds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-mzcln             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-161305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-zrzxf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-161305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-161305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-wptvn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-161305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-161305                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 94s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-161305 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-161305 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-161305 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Warning  ContainerGCFailed        2m36s (x2 over 3m36s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           86s                    node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           84s                    node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	
	
	Name:               ha-161305-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_38_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:38:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:50:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:49:51 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:49:51 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:49:51 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:49:51 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    ha-161305-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a157fd7e5c14479d97024c5548311976
	  System UUID:                a157fd7e-5c14-479d-9702-4c5548311976
	  Boot ID:                    4f645d45-ff44-451d-986c-85a804baaea9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v2pq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-161305-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-dj7v2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-161305-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-161305-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pqr2f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-161305-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-161305-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-161305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-161305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-161305-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  NodeNotReady             8m48s                node-controller  Node ha-161305-m02 status is now: NodeNotReady
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m2s)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m2s)  kubelet          Node ha-161305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m2s)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           86s                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           84s                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           27s                  node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	
	
	Name:               ha-161305-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_39_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:39:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:50:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:50:20 +0000   Tue, 30 Jul 2024 00:39:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:50:20 +0000   Tue, 30 Jul 2024 00:39:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:50:20 +0000   Tue, 30 Jul 2024 00:39:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:50:20 +0000   Tue, 30 Jul 2024 00:39:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-161305-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 879cbedc505f4ed1b9b3132464b6d69b
	  System UUID:                879cbedc-505f-4ed1-b9b3-132464b6d69b
	  Boot ID:                    15d841b9-080c-408c-90a2-62a7ea2b5a36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k6rhx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-161305-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-x7292                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-161305-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-161305-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-v86sk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-161305-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-161305-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 37s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-161305-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-161305-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-161305-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	  Normal   RegisteredNode           86s                node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	  Normal   RegisteredNode           84s                node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  55s                kubelet          Node ha-161305-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s                kubelet          Node ha-161305-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s                kubelet          Node ha-161305-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 55s                kubelet          Node ha-161305-m03 has been rebooted, boot id: 15d841b9-080c-408c-90a2-62a7ea2b5a36
	  Normal   RegisteredNode           27s                node-controller  Node ha-161305-m03 event: Registered Node ha-161305-m03 in Controller
	
	
	Name:               ha-161305-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_40_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:50:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:50:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:50:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:50:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:50:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-161305-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16981c9b42447afa5527547ca393cc7
	  System UUID:                b16981c9-b424-47af-a552-7547ca393cc7
	  Boot ID:                    cd17f5a2-30ac-44ae-8c6d-bf637a282fdf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bdl2h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-f9bfb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-161305-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   NodeReady                9m49s              kubelet          Node ha-161305-m04 status is now: NodeReady
	  Normal   RegisteredNode           86s                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           84s                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   NodeNotReady             46s                node-controller  Node ha-161305-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-161305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-161305-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-161305-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-161305-m04 has been rebooted, boot id: cd17f5a2-30ac-44ae-8c6d-bf637a282fdf
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-161305-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.201013] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.060589] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060160] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.175750] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.105381] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.262727] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +3.969960] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[Jul30 00:37] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063938] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.953682] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.085875] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.685156] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.526010] kauditd_printk_skb: 38 callbacks suppressed
	[Jul30 00:38] kauditd_printk_skb: 26 callbacks suppressed
	[Jul30 00:48] systemd-fstab-generator[3667]: Ignoring "noauto" option for root device
	[  +0.145039] systemd-fstab-generator[3679]: Ignoring "noauto" option for root device
	[  +0.168770] systemd-fstab-generator[3693]: Ignoring "noauto" option for root device
	[  +0.148265] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +0.269338] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[  +9.072086] systemd-fstab-generator[3835]: Ignoring "noauto" option for root device
	[  +0.089847] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.952407] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.176537] kauditd_printk_skb: 97 callbacks suppressed
	[ +28.601116] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd] <==
	{"level":"warn","ts":"2024-07-30T00:49:43.778513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d33e7f1dba1e46ae","from":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-30T00:49:43.944911Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.23:2380/version","remote-member-id":"f9852bfb3a2ffd8d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:43.945017Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f9852bfb3a2ffd8d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:47.33148Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:47.331507Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:47.947218Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.23:2380/version","remote-member-id":"f9852bfb3a2ffd8d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:47.947291Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f9852bfb3a2ffd8d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:51.949657Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.23:2380/version","remote-member-id":"f9852bfb3a2ffd8d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:51.949717Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f9852bfb3a2ffd8d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:52.332035Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:52.332092Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-30T00:49:53.150315Z","caller":"traceutil/trace.go:171","msg":"trace[566036494] transaction","detail":"{read_only:false; response_revision:2284; number_of_response:1; }","duration":"108.435273ms","start":"2024-07-30T00:49:53.041807Z","end":"2024-07-30T00:49:53.150243Z","steps":["trace[566036494] 'process raft request'  (duration: 108.336693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T00:49:55.951634Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.23:2380/version","remote-member-id":"f9852bfb3a2ffd8d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:55.951721Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f9852bfb3a2ffd8d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:57.332556Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:49:57.332644Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-30T00:49:59.580237Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d33e7f1dba1e46ae","to":"f9852bfb3a2ffd8d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-30T00:49:59.580386Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:49:59.580467Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:49:59.581544Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:49:59.583784Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:49:59.584493Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d33e7f1dba1e46ae","to":"f9852bfb3a2ffd8d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-30T00:49:59.584598Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"warn","ts":"2024-07-30T00:50:02.333235Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:50:02.333245Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	
	
	==> etcd [a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb] <==
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-30T00:46:38.599404Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.80:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T00:46:38.599708Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.80:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-30T00:46:38.601049Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"d33e7f1dba1e46ae","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-30T00:46:38.601257Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.60127Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601293Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601377Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601404Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601434Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.60145Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601455Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.601463Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.60148Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.601533Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.601558Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.601581Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.60159Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.604497Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2024-07-30T00:46:38.604707Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2024-07-30T00:46:38.604766Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-161305","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"]}
	
	
	==> kernel <==
	 00:50:44 up 14 min,  0 users,  load average: 0.22, 0.42, 0.34
	Linux ha-161305 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0] <==
	I0730 00:46:16.760501       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:46:16.760507       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:46:16.760701       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:46:16.760721       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:46:16.760793       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:46:16.760810       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:46:26.757441       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:46:26.757491       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:46:26.757650       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:46:26.757670       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:46:26.757722       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:46:26.757742       1 main.go:299] handling current node
	I0730 00:46:26.757764       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:46:26.757769       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	E0730 00:46:26.860545       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1844&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	W0730 00:46:29.932396       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	E0730 00:46:29.932462       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	I0730 00:46:36.757464       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:46:36.757539       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:46:36.757686       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:46:36.757705       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:46:36.757760       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:46:36.757775       1 main.go:299] handling current node
	I0730 00:46:36.757791       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:46:36.757795       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5] <==
	I0730 00:50:07.688824       1 main.go:299] handling current node
	I0730 00:50:17.688294       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:50:17.688550       1 main.go:299] handling current node
	I0730 00:50:17.688630       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:50:17.688694       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:50:17.688906       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:50:17.688944       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:50:17.689113       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:50:17.689154       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:50:27.688031       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:50:27.688235       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:50:27.688575       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:50:27.688632       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:50:27.688728       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:50:27.688756       1 main.go:299] handling current node
	I0730 00:50:27.688837       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:50:27.688864       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:50:37.688137       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:50:37.688235       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:50:37.688446       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:50:37.688507       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:50:37.688592       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:50:37.688613       1 main.go:299] handling current node
	I0730 00:50:37.688642       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:50:37.688659       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be] <==
	I0730 00:49:04.188859       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0730 00:49:04.169081       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0730 00:49:04.269632       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0730 00:49:04.279894       1 aggregator.go:165] initial CRD sync complete...
	I0730 00:49:04.280340       1 autoregister_controller.go:141] Starting autoregister controller
	I0730 00:49:04.280384       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0730 00:49:04.331260       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0730 00:49:04.335774       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 00:49:04.335864       1 policy_source.go:224] refreshing policies
	I0730 00:49:04.367456       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0730 00:49:04.367554       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0730 00:49:04.368695       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 00:49:04.369326       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 00:49:04.369384       1 shared_informer.go:320] Caches are synced for configmaps
	I0730 00:49:04.369340       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0730 00:49:04.374804       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0730 00:49:04.382496       1 cache.go:39] Caches are synced for autoregister controller
	W0730 00:49:04.387719       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23]
	I0730 00:49:04.389218       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 00:49:04.392436       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0730 00:49:04.403676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0730 00:49:04.409228       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0730 00:49:05.174481       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0730 00:49:05.530471       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23 192.168.39.80]
	W0730 00:49:25.533586       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.126 192.168.39.80]
	
	
	==> kube-apiserver [e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b] <==
	I0730 00:48:26.980191       1 options.go:221] external host was not specified, using 192.168.39.80
	I0730 00:48:26.981314       1 server.go:148] Version: v1.30.3
	I0730 00:48:26.982255       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:48:27.768723       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0730 00:48:27.770076       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 00:48:27.771859       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0730 00:48:27.771924       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0730 00:48:27.772133       1 instance.go:299] Using reconciler: lease
	W0730 00:48:47.766575       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0730 00:48:47.766765       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0730 00:48:47.773013       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0730 00:48:47.773016       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7] <==
	I0730 00:49:20.092844       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0730 00:49:20.195525       1 shared_informer.go:320] Caches are synced for deployment
	I0730 00:49:20.196481       1 shared_informer.go:320] Caches are synced for disruption
	I0730 00:49:20.248799       1 shared_informer.go:320] Caches are synced for taint
	I0730 00:49:20.249005       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0730 00:49:20.249127       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-161305"
	I0730 00:49:20.249177       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-161305-m02"
	I0730 00:49:20.249214       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-161305-m03"
	I0730 00:49:20.249245       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-161305-m04"
	I0730 00:49:20.249491       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0730 00:49:20.253032       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 00:49:20.278147       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 00:49:20.303325       1 shared_informer.go:320] Caches are synced for daemon sets
	I0730 00:49:20.707866       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 00:49:20.740295       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 00:49:20.740334       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0730 00:49:27.370401       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-hbt4n EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-hbt4n\": the object has been modified; please apply your changes to the latest version and try again"
	I0730 00:49:27.370822       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"ca6eeec0-8d49-4db9-a07a-6a1f37ce2d17", APIVersion:"v1", ResourceVersion:"259", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-hbt4n EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-hbt4n": the object has been modified; please apply your changes to the latest version and try again
	I0730 00:49:27.389874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.219977ms"
	I0730 00:49:27.390243       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="142.318µs"
	I0730 00:49:50.612347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.548125ms"
	I0730 00:49:50.612448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.746µs"
	I0730 00:50:08.907440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.994417ms"
	I0730 00:50:08.907568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.558µs"
	I0730 00:50:35.292404       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-161305-m04"
	
	
	==> kube-controller-manager [3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55] <==
	I0730 00:48:27.422433       1 serving.go:380] Generated self-signed cert in-memory
	I0730 00:48:28.228723       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0730 00:48:28.228760       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:48:28.230597       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0730 00:48:28.230751       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0730 00:48:28.232302       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0730 00:48:28.232375       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0730 00:48:48.777749       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.80:8443/healthz\": dial tcp 192.168.39.80:8443: connect: connection refused"
	
	
	==> kube-proxy [1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2] <==
	E0730 00:45:19.853076       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:22.924319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:22.924401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:22.924471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:22.924508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:22.924319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:22.924579       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:29.516449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:29.516527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:29.516449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:29.516562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:29.516761       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:29.516844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:38.733529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:38.733626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:38.733640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:38.733674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:41.805517       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:41.805621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:54.093269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:54.093333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:46:06.381025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:46:06.381106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:46:09.453358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:46:09.454004       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f] <==
	I0730 00:48:28.260039       1 server_linux.go:69] "Using iptables proxy"
	E0730 00:48:30.765334       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:33.837250       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:36.908953       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:43.052445       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:52.268874       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0730 00:49:09.781706       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.80"]
	I0730 00:49:09.813651       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 00:49:09.813756       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 00:49:09.813786       1 server_linux.go:165] "Using iptables Proxier"
	I0730 00:49:09.816188       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 00:49:09.816436       1 server.go:872] "Version info" version="v1.30.3"
	I0730 00:49:09.816460       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:49:09.817946       1 config.go:192] "Starting service config controller"
	I0730 00:49:09.818049       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 00:49:09.818118       1 config.go:101] "Starting endpoint slice config controller"
	I0730 00:49:09.818137       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 00:49:09.818902       1 config.go:319] "Starting node config controller"
	I0730 00:49:09.818937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 00:49:09.918952       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 00:49:09.919067       1 shared_informer.go:320] Caches are synced for node config
	I0730 00:49:09.919080       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12] <==
	W0730 00:46:33.445134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:33.445176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 00:46:34.101706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 00:46:34.101839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 00:46:34.177268       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 00:46:34.177315       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 00:46:34.631295       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 00:46:34.631393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0730 00:46:35.130472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0730 00:46:35.130517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0730 00:46:35.290905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0730 00:46:35.290950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0730 00:46:35.410283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 00:46:35.410388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 00:46:35.787271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:35.787319       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0730 00:46:35.859667       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 00:46:35.859806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 00:46:35.874789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 00:46:35.874877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 00:46:36.056161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:36.056210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 00:46:36.183016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:36.183058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:38.518229       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e] <==
	W0730 00:48:56.871568       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.80:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:56.871699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.80:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:56.909449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.80:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:56.909551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.80:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.034082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.034195       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.221307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.80:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.221364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.80:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.516154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.80:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.516202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.80:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.783858       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.784029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.858869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.80:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.859101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.80:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.948932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.80:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.949834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.80:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:49:04.195620       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 00:49:04.196204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 00:49:04.197280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 00:49:04.197343       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 00:49:04.197477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 00:49:04.197511       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 00:49:04.197564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 00:49:04.197591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0730 00:49:11.292412       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 00:49:07 ha-161305 kubelet[1372]: E0730 00:49:07.628508    1372 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-161305\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 30 00:49:07 ha-161305 kubelet[1372]: E0730 00:49:07.628829    1372 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-161305.17e6d6fba7150982  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-161305,UID:e78fc87ed9d024ac0fe2effd95cda2d8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-161305,},FirstTimestamp:2024-07-30 00:44:43.84410253 +0000 UTC m=+455.616851882,LastTimestamp:2024-07-30 00:44:43.84410253 +0000 UTC m=+455.616851882,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-161305,}"
	Jul 30 00:49:07 ha-161305 kubelet[1372]: I0730 00:49:07.629048    1372 status_manager.go:853] "Failed to get status for pod" podUID="75260b22-5ffc-4848-8c70-5b9cb3f010bf" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 30 00:49:08 ha-161305 kubelet[1372]: I0730 00:49:08.391587    1372 scope.go:117] "RemoveContainer" containerID="dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d"
	Jul 30 00:49:08 ha-161305 kubelet[1372]: E0730 00:49:08.392609    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75260b22-5ffc-4848-8c70-5b9cb3f010bf)\"" pod="kube-system/storage-provisioner" podUID="75260b22-5ffc-4848-8c70-5b9cb3f010bf"
	Jul 30 00:49:08 ha-161305 kubelet[1372]: I0730 00:49:08.393087    1372 scope.go:117] "RemoveContainer" containerID="3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55"
	Jul 30 00:49:08 ha-161305 kubelet[1372]: E0730 00:49:08.395231    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:49:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:49:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:49:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:49:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:49:14 ha-161305 kubelet[1372]: I0730 00:49:14.635315    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-ttjx8" podStartSLOduration=553.819802507 podStartE2EDuration="9m16.635284485s" podCreationTimestamp="2024-07-30 00:39:58 +0000 UTC" firstStartedPulling="2024-07-30 00:39:59.466224547 +0000 UTC m=+171.238973886" lastFinishedPulling="2024-07-30 00:40:02.281706516 +0000 UTC m=+174.054455864" observedRunningTime="2024-07-30 00:40:03.131282333 +0000 UTC m=+174.904031688" watchObservedRunningTime="2024-07-30 00:49:14.635284485 +0000 UTC m=+726.408033841"
	Jul 30 00:49:20 ha-161305 kubelet[1372]: I0730 00:49:20.356492    1372 scope.go:117] "RemoveContainer" containerID="dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d"
	Jul 30 00:49:20 ha-161305 kubelet[1372]: E0730 00:49:20.357596    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75260b22-5ffc-4848-8c70-5b9cb3f010bf)\"" pod="kube-system/storage-provisioner" podUID="75260b22-5ffc-4848-8c70-5b9cb3f010bf"
	Jul 30 00:49:33 ha-161305 kubelet[1372]: I0730 00:49:33.353919    1372 scope.go:117] "RemoveContainer" containerID="dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d"
	Jul 30 00:49:33 ha-161305 kubelet[1372]: E0730 00:49:33.354728    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75260b22-5ffc-4848-8c70-5b9cb3f010bf)\"" pod="kube-system/storage-provisioner" podUID="75260b22-5ffc-4848-8c70-5b9cb3f010bf"
	Jul 30 00:49:47 ha-161305 kubelet[1372]: I0730 00:49:47.354353    1372 scope.go:117] "RemoveContainer" containerID="dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d"
	Jul 30 00:50:04 ha-161305 kubelet[1372]: I0730 00:50:04.355047    1372 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-161305" podUID="084d986e-4abd-4c66-aea9-5738f6a60ac5"
	Jul 30 00:50:04 ha-161305 kubelet[1372]: I0730 00:50:04.379118    1372 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-161305"
	Jul 30 00:50:08 ha-161305 kubelet[1372]: E0730 00:50:08.373912    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:50:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:50:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:50:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:50:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:50:08 ha-161305 kubelet[1372]: I0730 00:50:08.377762    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-161305" podStartSLOduration=4.377727113 podStartE2EDuration="4.377727113s" podCreationTimestamp="2024-07-30 00:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-30 00:50:08.377248376 +0000 UTC m=+780.149997743" watchObservedRunningTime="2024-07-30 00:50:08.377727113 +0000 UTC m=+780.150476465"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0730 00:50:43.088228  524430 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19346-495103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-161305 -n ha-161305
helpers_test.go:261: (dbg) Run:  kubectl --context ha-161305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 stop -v=7 --alsologtostderr
E0730 00:51:10.081876  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 stop -v=7 --alsologtostderr: exit status 82 (2m0.474833032s)

                                                
                                                
-- stdout --
	* Stopping node "ha-161305-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:51:02.918040  524848 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:51:02.918523  524848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:51:02.918537  524848 out.go:304] Setting ErrFile to fd 2...
	I0730 00:51:02.918543  524848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:51:02.918760  524848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:51:02.919025  524848 out.go:298] Setting JSON to false
	I0730 00:51:02.919137  524848 mustload.go:65] Loading cluster: ha-161305
	I0730 00:51:02.919518  524848 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:51:02.919630  524848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:51:02.919833  524848 mustload.go:65] Loading cluster: ha-161305
	I0730 00:51:02.919984  524848 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:51:02.920031  524848 stop.go:39] StopHost: ha-161305-m04
	I0730 00:51:02.920395  524848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:51:02.920453  524848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:51:02.935862  524848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44339
	I0730 00:51:02.936558  524848 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:51:02.937138  524848 main.go:141] libmachine: Using API Version  1
	I0730 00:51:02.937162  524848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:51:02.937506  524848 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:51:02.940159  524848 out.go:177] * Stopping node "ha-161305-m04"  ...
	I0730 00:51:02.941638  524848 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0730 00:51:02.941672  524848 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:51:02.941987  524848 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0730 00:51:02.942055  524848 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:51:02.944739  524848 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:51:02.945171  524848 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:50:29 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:51:02.945201  524848 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:51:02.945370  524848 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:51:02.945582  524848 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:51:02.945769  524848 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:51:02.945900  524848 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	I0730 00:51:03.028120  524848 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0730 00:51:03.080326  524848 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0730 00:51:03.132218  524848 main.go:141] libmachine: Stopping "ha-161305-m04"...
	I0730 00:51:03.132249  524848 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:51:03.133912  524848 main.go:141] libmachine: (ha-161305-m04) Calling .Stop
	I0730 00:51:03.137693  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 0/120
	I0730 00:51:04.139624  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 1/120
	I0730 00:51:05.141157  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 2/120
	I0730 00:51:06.142520  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 3/120
	I0730 00:51:07.144935  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 4/120
	I0730 00:51:08.147068  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 5/120
	I0730 00:51:09.148589  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 6/120
	I0730 00:51:10.149974  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 7/120
	I0730 00:51:11.151210  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 8/120
	I0730 00:51:12.152661  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 9/120
	I0730 00:51:13.155161  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 10/120
	I0730 00:51:14.157154  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 11/120
	I0730 00:51:15.159622  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 12/120
	I0730 00:51:16.161018  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 13/120
	I0730 00:51:17.162512  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 14/120
	I0730 00:51:18.163777  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 15/120
	I0730 00:51:19.165149  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 16/120
	I0730 00:51:20.166529  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 17/120
	I0730 00:51:21.168107  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 18/120
	I0730 00:51:22.169556  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 19/120
	I0730 00:51:23.171895  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 20/120
	I0730 00:51:24.173560  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 21/120
	I0730 00:51:25.174903  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 22/120
	I0730 00:51:26.176326  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 23/120
	I0730 00:51:27.177977  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 24/120
	I0730 00:51:28.179493  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 25/120
	I0730 00:51:29.180895  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 26/120
	I0730 00:51:30.183298  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 27/120
	I0730 00:51:31.184607  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 28/120
	I0730 00:51:32.186102  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 29/120
	I0730 00:51:33.187812  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 30/120
	I0730 00:51:34.189274  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 31/120
	I0730 00:51:35.191270  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 32/120
	I0730 00:51:36.192598  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 33/120
	I0730 00:51:37.194043  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 34/120
	I0730 00:51:38.196130  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 35/120
	I0730 00:51:39.197354  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 36/120
	I0730 00:51:40.199276  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 37/120
	I0730 00:51:41.200619  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 38/120
	I0730 00:51:42.202035  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 39/120
	I0730 00:51:43.204870  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 40/120
	I0730 00:51:44.207193  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 41/120
	I0730 00:51:45.208495  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 42/120
	I0730 00:51:46.209772  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 43/120
	I0730 00:51:47.212030  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 44/120
	I0730 00:51:48.213668  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 45/120
	I0730 00:51:49.215116  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 46/120
	I0730 00:51:50.216473  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 47/120
	I0730 00:51:51.217908  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 48/120
	I0730 00:51:52.219395  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 49/120
	I0730 00:51:53.221408  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 50/120
	I0730 00:51:54.222838  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 51/120
	I0730 00:51:55.224260  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 52/120
	I0730 00:51:56.226469  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 53/120
	I0730 00:51:57.227896  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 54/120
	I0730 00:51:58.230021  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 55/120
	I0730 00:51:59.232279  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 56/120
	I0730 00:52:00.233842  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 57/120
	I0730 00:52:01.235138  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 58/120
	I0730 00:52:02.236814  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 59/120
	I0730 00:52:03.239036  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 60/120
	I0730 00:52:04.240514  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 61/120
	I0730 00:52:05.242068  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 62/120
	I0730 00:52:06.243371  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 63/120
	I0730 00:52:07.244768  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 64/120
	I0730 00:52:08.246546  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 65/120
	I0730 00:52:09.247939  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 66/120
	I0730 00:52:10.249365  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 67/120
	I0730 00:52:11.251060  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 68/120
	I0730 00:52:12.252651  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 69/120
	I0730 00:52:13.254728  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 70/120
	I0730 00:52:14.256141  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 71/120
	I0730 00:52:15.257728  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 72/120
	I0730 00:52:16.259221  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 73/120
	I0730 00:52:17.260798  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 74/120
	I0730 00:52:18.262863  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 75/120
	I0730 00:52:19.265150  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 76/120
	I0730 00:52:20.267384  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 77/120
	I0730 00:52:21.268695  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 78/120
	I0730 00:52:22.270091  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 79/120
	I0730 00:52:23.272421  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 80/120
	I0730 00:52:24.273942  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 81/120
	I0730 00:52:25.275307  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 82/120
	I0730 00:52:26.276755  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 83/120
	I0730 00:52:27.278544  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 84/120
	I0730 00:52:28.280658  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 85/120
	I0730 00:52:29.281981  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 86/120
	I0730 00:52:30.283282  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 87/120
	I0730 00:52:31.285035  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 88/120
	I0730 00:52:32.286533  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 89/120
	I0730 00:52:33.288318  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 90/120
	I0730 00:52:34.289950  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 91/120
	I0730 00:52:35.291408  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 92/120
	I0730 00:52:36.292657  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 93/120
	I0730 00:52:37.294111  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 94/120
	I0730 00:52:38.296208  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 95/120
	I0730 00:52:39.297709  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 96/120
	I0730 00:52:40.299312  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 97/120
	I0730 00:52:41.300523  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 98/120
	I0730 00:52:42.302981  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 99/120
	I0730 00:52:43.304996  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 100/120
	I0730 00:52:44.307277  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 101/120
	I0730 00:52:45.308749  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 102/120
	I0730 00:52:46.310024  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 103/120
	I0730 00:52:47.311199  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 104/120
	I0730 00:52:48.313294  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 105/120
	I0730 00:52:49.315535  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 106/120
	I0730 00:52:50.316642  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 107/120
	I0730 00:52:51.318059  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 108/120
	I0730 00:52:52.319669  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 109/120
	I0730 00:52:53.321893  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 110/120
	I0730 00:52:54.323774  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 111/120
	I0730 00:52:55.325903  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 112/120
	I0730 00:52:56.327803  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 113/120
	I0730 00:52:57.329317  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 114/120
	I0730 00:52:58.331788  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 115/120
	I0730 00:52:59.333514  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 116/120
	I0730 00:53:00.334845  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 117/120
	I0730 00:53:01.336299  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 118/120
	I0730 00:53:02.338753  524848 main.go:141] libmachine: (ha-161305-m04) Waiting for machine to stop 119/120
	I0730 00:53:03.340056  524848 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0730 00:53:03.340135  524848 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0730 00:53:03.341986  524848 out.go:177] 
	W0730 00:53:03.343125  524848 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0730 00:53:03.343141  524848 out.go:239] * 
	* 
	W0730 00:53:03.346264  524848 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0730 00:53:03.347551  524848 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-161305 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr: exit status 3 (18.909425565s)

                                                
                                                
-- stdout --
	ha-161305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-161305-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:53:03.398136  525278 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:53:03.398260  525278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:53:03.398272  525278 out.go:304] Setting ErrFile to fd 2...
	I0730 00:53:03.398277  525278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:53:03.398495  525278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:53:03.398725  525278 out.go:298] Setting JSON to false
	I0730 00:53:03.398758  525278 mustload.go:65] Loading cluster: ha-161305
	I0730 00:53:03.398865  525278 notify.go:220] Checking for updates...
	I0730 00:53:03.399226  525278 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:53:03.399245  525278 status.go:255] checking status of ha-161305 ...
	I0730 00:53:03.399641  525278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:03.399713  525278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:03.419198  525278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I0730 00:53:03.419764  525278 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:03.420394  525278 main.go:141] libmachine: Using API Version  1
	I0730 00:53:03.420422  525278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:03.420747  525278 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:03.420940  525278 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:53:03.422696  525278 status.go:330] ha-161305 host status = "Running" (err=<nil>)
	I0730 00:53:03.422715  525278 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:53:03.422991  525278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:03.423036  525278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:03.439930  525278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35703
	I0730 00:53:03.440468  525278 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:03.441114  525278 main.go:141] libmachine: Using API Version  1
	I0730 00:53:03.441142  525278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:03.441433  525278 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:03.441589  525278 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:53:03.444718  525278 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:03.445238  525278 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:03.445266  525278 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:03.445371  525278 host.go:66] Checking if "ha-161305" exists ...
	I0730 00:53:03.445737  525278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:03.445797  525278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:03.461196  525278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41539
	I0730 00:53:03.461584  525278 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:03.462028  525278 main.go:141] libmachine: Using API Version  1
	I0730 00:53:03.462050  525278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:03.462350  525278 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:03.462487  525278 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:53:03.462701  525278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:53:03.462731  525278 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:03.465701  525278 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:03.466276  525278 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:03.466309  525278 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:03.466434  525278 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:03.466601  525278 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:03.466797  525278 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:03.466950  525278 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:53:03.562588  525278 ssh_runner.go:195] Run: systemctl --version
	I0730 00:53:03.571433  525278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:53:03.594327  525278 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:53:03.594358  525278 api_server.go:166] Checking apiserver status ...
	I0730 00:53:03.594391  525278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:53:03.618278  525278 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4984/cgroup
	W0730 00:53:03.630225  525278 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4984/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:53:03.630273  525278 ssh_runner.go:195] Run: ls
	I0730 00:53:03.634980  525278 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:53:03.639439  525278 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:53:03.639467  525278 status.go:422] ha-161305 apiserver status = Running (err=<nil>)
	I0730 00:53:03.639479  525278 status.go:257] ha-161305 status: &{Name:ha-161305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:53:03.639503  525278 status.go:255] checking status of ha-161305-m02 ...
	I0730 00:53:03.639796  525278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:03.639838  525278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:03.656625  525278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39039
	I0730 00:53:03.657077  525278 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:03.657571  525278 main.go:141] libmachine: Using API Version  1
	I0730 00:53:03.657593  525278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:03.657892  525278 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:03.658080  525278 main.go:141] libmachine: (ha-161305-m02) Calling .GetState
	I0730 00:53:03.659668  525278 status.go:330] ha-161305-m02 host status = "Running" (err=<nil>)
	I0730 00:53:03.659689  525278 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:53:03.659974  525278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:03.660011  525278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:03.675177  525278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I0730 00:53:03.675630  525278 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:03.676131  525278 main.go:141] libmachine: Using API Version  1
	I0730 00:53:03.676158  525278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:03.676467  525278 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:03.676740  525278 main.go:141] libmachine: (ha-161305-m02) Calling .GetIP
	I0730 00:53:03.679632  525278 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:53:03.680022  525278 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:48:30 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:53:03.680053  525278 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:53:03.680192  525278 host.go:66] Checking if "ha-161305-m02" exists ...
	I0730 00:53:03.680497  525278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:03.680544  525278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:03.697046  525278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0730 00:53:03.697483  525278 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:03.698040  525278 main.go:141] libmachine: Using API Version  1
	I0730 00:53:03.698065  525278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:03.698399  525278 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:03.698638  525278 main.go:141] libmachine: (ha-161305-m02) Calling .DriverName
	I0730 00:53:03.698854  525278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:53:03.698877  525278 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHHostname
	I0730 00:53:03.701983  525278 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:53:03.702485  525278 main.go:141] libmachine: (ha-161305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:e3:c9", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:48:30 +0000 UTC Type:0 Mac:52:54:00:44:e3:c9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-161305-m02 Clientid:01:52:54:00:44:e3:c9}
	I0730 00:53:03.702515  525278 main.go:141] libmachine: (ha-161305-m02) DBG | domain ha-161305-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:44:e3:c9 in network mk-ha-161305
	I0730 00:53:03.702711  525278 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHPort
	I0730 00:53:03.702862  525278 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHKeyPath
	I0730 00:53:03.703058  525278 main.go:141] libmachine: (ha-161305-m02) Calling .GetSSHUsername
	I0730 00:53:03.703217  525278 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m02/id_rsa Username:docker}
	I0730 00:53:03.797760  525278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 00:53:03.817132  525278 kubeconfig.go:125] found "ha-161305" server: "https://192.168.39.254:8443"
	I0730 00:53:03.817166  525278 api_server.go:166] Checking apiserver status ...
	I0730 00:53:03.817206  525278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 00:53:03.838946  525278 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0730 00:53:03.848580  525278 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 00:53:03.848640  525278 ssh_runner.go:195] Run: ls
	I0730 00:53:03.853203  525278 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0730 00:53:03.857541  525278 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0730 00:53:03.857564  525278 status.go:422] ha-161305-m02 apiserver status = Running (err=<nil>)
	I0730 00:53:03.857572  525278 status.go:257] ha-161305-m02 status: &{Name:ha-161305-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 00:53:03.857589  525278 status.go:255] checking status of ha-161305-m04 ...
	I0730 00:53:03.857902  525278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:03.857939  525278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:03.873698  525278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45331
	I0730 00:53:03.874124  525278 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:03.874607  525278 main.go:141] libmachine: Using API Version  1
	I0730 00:53:03.874628  525278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:03.874971  525278 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:03.875190  525278 main.go:141] libmachine: (ha-161305-m04) Calling .GetState
	I0730 00:53:03.876786  525278 status.go:330] ha-161305-m04 host status = "Running" (err=<nil>)
	I0730 00:53:03.876805  525278 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:53:03.877194  525278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:03.877238  525278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:03.892009  525278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0730 00:53:03.892454  525278 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:03.892916  525278 main.go:141] libmachine: Using API Version  1
	I0730 00:53:03.892939  525278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:03.893368  525278 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:03.893605  525278 main.go:141] libmachine: (ha-161305-m04) Calling .GetIP
	I0730 00:53:03.896339  525278 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:53:03.896722  525278 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:50:29 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:53:03.896758  525278 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:53:03.896886  525278 host.go:66] Checking if "ha-161305-m04" exists ...
	I0730 00:53:03.897179  525278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:03.897220  525278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:03.912010  525278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0730 00:53:03.912449  525278 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:03.912895  525278 main.go:141] libmachine: Using API Version  1
	I0730 00:53:03.912921  525278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:03.913311  525278 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:03.913506  525278 main.go:141] libmachine: (ha-161305-m04) Calling .DriverName
	I0730 00:53:03.913713  525278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 00:53:03.913740  525278 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHHostname
	I0730 00:53:03.916287  525278 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:53:03.916685  525278 main.go:141] libmachine: (ha-161305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:6f:05", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:50:29 +0000 UTC Type:0 Mac:52:54:00:3d:6f:05 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-161305-m04 Clientid:01:52:54:00:3d:6f:05}
	I0730 00:53:03.916733  525278 main.go:141] libmachine: (ha-161305-m04) DBG | domain ha-161305-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:3d:6f:05 in network mk-ha-161305
	I0730 00:53:03.916819  525278 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHPort
	I0730 00:53:03.917017  525278 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHKeyPath
	I0730 00:53:03.917159  525278 main.go:141] libmachine: (ha-161305-m04) Calling .GetSSHUsername
	I0730 00:53:03.917304  525278 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305-m04/id_rsa Username:docker}
	W0730 00:53:22.256919  525278 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0730 00:53:22.257030  525278 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0730 00:53:22.257048  525278 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0730 00:53:22.257058  525278 status.go:257] ha-161305-m04 status: &{Name:ha-161305-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0730 00:53:22.257094  525278 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-161305 -n ha-161305
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-161305 logs -n 25: (1.595064442s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-161305 ssh -n ha-161305-m02 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m04 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp testdata/cp-test.txt                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305:/home/docker/cp-test_ha-161305-m04_ha-161305.txt                       |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305 sudo cat                                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305.txt                                 |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m02:/home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m02 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03:/home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m03 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-161305 node stop m02 -v=7                                                     | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-161305 node start m02 -v=7                                                    | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-161305 -v=7                                                           | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-161305 -v=7                                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-161305 --wait=true -v=7                                                    | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:46 UTC | 30 Jul 24 00:50 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-161305                                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:50 UTC |                     |
	| node    | ha-161305 node delete m03 -v=7                                                   | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:50 UTC | 30 Jul 24 00:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-161305 stop -v=7                                                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:46:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:46:37.590312  523084 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:46:37.590475  523084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:46:37.590485  523084 out.go:304] Setting ErrFile to fd 2...
	I0730 00:46:37.590491  523084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:46:37.590681  523084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:46:37.591316  523084 out.go:298] Setting JSON to false
	I0730 00:46:37.592426  523084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8940,"bootTime":1722291458,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:46:37.592488  523084 start.go:139] virtualization: kvm guest
	I0730 00:46:37.595766  523084 out.go:177] * [ha-161305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:46:37.597262  523084 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:46:37.597279  523084 notify.go:220] Checking for updates...
	I0730 00:46:37.599712  523084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:46:37.600963  523084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:46:37.602222  523084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:46:37.603543  523084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:46:37.604753  523084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:46:37.606568  523084 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:46:37.606731  523084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:46:37.607401  523084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:46:37.607491  523084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:46:37.622902  523084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0730 00:46:37.623409  523084 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:46:37.624003  523084 main.go:141] libmachine: Using API Version  1
	I0730 00:46:37.624027  523084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:46:37.624437  523084 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:46:37.624775  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:46:37.660457  523084 out.go:177] * Using the kvm2 driver based on existing profile
	I0730 00:46:37.661860  523084 start.go:297] selected driver: kvm2
	I0730 00:46:37.661887  523084 start.go:901] validating driver "kvm2" against &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:46:37.662349  523084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:46:37.662657  523084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:46:37.662725  523084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:46:37.679193  523084 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:46:37.679878  523084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:46:37.679945  523084 cni.go:84] Creating CNI manager for ""
	I0730 00:46:37.679956  523084 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0730 00:46:37.680024  523084 start.go:340] cluster config:
	{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:46:37.680151  523084 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:46:37.682084  523084 out.go:177] * Starting "ha-161305" primary control-plane node in "ha-161305" cluster
	I0730 00:46:37.683444  523084 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:46:37.683490  523084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:46:37.683500  523084 cache.go:56] Caching tarball of preloaded images
	I0730 00:46:37.683576  523084 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:46:37.683586  523084 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:46:37.683701  523084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:46:37.683893  523084 start.go:360] acquireMachinesLock for ha-161305: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:46:37.683954  523084 start.go:364] duration metric: took 41.973µs to acquireMachinesLock for "ha-161305"
	I0730 00:46:37.683972  523084 start.go:96] Skipping create...Using existing machine configuration
	I0730 00:46:37.683985  523084 fix.go:54] fixHost starting: 
	I0730 00:46:37.684230  523084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:46:37.684261  523084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:46:37.698933  523084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0730 00:46:37.699395  523084 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:46:37.699933  523084 main.go:141] libmachine: Using API Version  1
	I0730 00:46:37.699956  523084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:46:37.700310  523084 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:46:37.700480  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:46:37.700600  523084 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:46:37.702277  523084 fix.go:112] recreateIfNeeded on ha-161305: state=Running err=<nil>
	W0730 00:46:37.702301  523084 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 00:46:37.704994  523084 out.go:177] * Updating the running kvm2 "ha-161305" VM ...
	I0730 00:46:37.706355  523084 machine.go:94] provisionDockerMachine start ...
	I0730 00:46:37.706380  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:46:37.706588  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:37.709124  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.709617  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:37.709647  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.709758  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:37.709948  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.710115  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.710241  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:37.710427  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:46:37.710632  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:46:37.710646  523084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 00:46:37.834233  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:46:37.834266  523084 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:46:37.834533  523084 buildroot.go:166] provisioning hostname "ha-161305"
	I0730 00:46:37.834559  523084 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:46:37.834781  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:37.837773  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.838226  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:37.838251  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.838495  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:37.838701  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.838866  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.839056  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:37.839226  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:46:37.839452  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:46:37.839473  523084 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305 && echo "ha-161305" | sudo tee /etc/hostname
	I0730 00:46:37.967641  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:46:37.967682  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:37.972231  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.972564  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:37.972597  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:37.972814  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:37.973072  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.973250  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:37.973435  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:37.973605  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:46:37.973818  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:46:37.973844  523084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:46:38.089867  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:46:38.089905  523084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:46:38.089930  523084 buildroot.go:174] setting up certificates
	I0730 00:46:38.089939  523084 provision.go:84] configureAuth start
	I0730 00:46:38.089947  523084 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:46:38.090262  523084 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:46:38.092973  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.093384  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:38.093417  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.093596  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:38.096434  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.096818  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:38.096845  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.096993  523084 provision.go:143] copyHostCerts
	I0730 00:46:38.097031  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:46:38.097082  523084 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:46:38.097096  523084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:46:38.097161  523084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:46:38.097244  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:46:38.097262  523084 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:46:38.097269  523084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:46:38.097293  523084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:46:38.097338  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:46:38.097355  523084 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:46:38.097361  523084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:46:38.097382  523084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:46:38.097443  523084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305 san=[127.0.0.1 192.168.39.80 ha-161305 localhost minikube]
	I0730 00:46:38.242386  523084 provision.go:177] copyRemoteCerts
	I0730 00:46:38.242461  523084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:46:38.242495  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:38.245213  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.245557  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:38.245586  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.245747  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:38.245935  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:38.246136  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:38.246294  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:46:38.334655  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:46:38.334749  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0730 00:46:38.359794  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:46:38.359895  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:46:38.382530  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:46:38.382601  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:46:38.408356  523084 provision.go:87] duration metric: took 318.40262ms to configureAuth
	I0730 00:46:38.408391  523084 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:46:38.408655  523084 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:46:38.408761  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:46:38.411371  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.411701  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:46:38.411723  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:46:38.411920  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:46:38.412127  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:38.412362  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:46:38.412505  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:46:38.412686  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:46:38.412918  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:46:38.412944  523084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:48:09.211747  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:48:09.211787  523084 machine.go:97] duration metric: took 1m31.505412701s to provisionDockerMachine
	I0730 00:48:09.211807  523084 start.go:293] postStartSetup for "ha-161305" (driver="kvm2")
	I0730 00:48:09.211825  523084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:48:09.211878  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.212262  523084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:48:09.212295  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.215280  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.215672  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.215702  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.215812  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.215994  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.216174  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.216309  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:48:09.304489  523084 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:48:09.308599  523084 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:48:09.308641  523084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:48:09.308731  523084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:48:09.308814  523084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:48:09.308827  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:48:09.308910  523084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:48:09.317614  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:48:09.340221  523084 start.go:296] duration metric: took 128.39793ms for postStartSetup
	I0730 00:48:09.340270  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.340605  523084 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0730 00:48:09.340634  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.343109  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.343503  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.343530  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.343710  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.343915  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.344064  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.344220  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	W0730 00:48:09.431138  523084 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0730 00:48:09.431174  523084 fix.go:56] duration metric: took 1m31.747193892s for fixHost
	I0730 00:48:09.431212  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.433724  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.434081  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.434110  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.434264  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.434447  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.434621  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.434704  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.434834  523084 main.go:141] libmachine: Using SSH client type: native
	I0730 00:48:09.435046  523084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:48:09.435059  523084 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 00:48:09.545337  523084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722300489.507743472
	
	I0730 00:48:09.545359  523084 fix.go:216] guest clock: 1722300489.507743472
	I0730 00:48:09.545367  523084 fix.go:229] Guest: 2024-07-30 00:48:09.507743472 +0000 UTC Remote: 2024-07-30 00:48:09.431181664 +0000 UTC m=+91.877567347 (delta=76.561808ms)
	I0730 00:48:09.545386  523084 fix.go:200] guest clock delta is within tolerance: 76.561808ms
	I0730 00:48:09.545392  523084 start.go:83] releasing machines lock for "ha-161305", held for 1m31.861425818s
	I0730 00:48:09.545436  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.545676  523084 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:48:09.548265  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.548619  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.548643  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.548836  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.549379  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.549566  523084 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:48:09.549664  523084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:48:09.549726  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.549756  523084 ssh_runner.go:195] Run: cat /version.json
	I0730 00:48:09.549783  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:48:09.552147  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.552465  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.552548  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.552570  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.552695  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.552849  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:09.552868  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.552870  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:09.553032  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.553065  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:48:09.553191  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:48:09.553179  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:48:09.553362  523084 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:48:09.553508  523084 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:48:09.665925  523084 ssh_runner.go:195] Run: systemctl --version
	I0730 00:48:09.671881  523084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:48:09.834017  523084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:48:09.847016  523084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:48:09.847092  523084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:48:09.855718  523084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 00:48:09.855753  523084 start.go:495] detecting cgroup driver to use...
	I0730 00:48:09.855836  523084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:48:09.871170  523084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:48:09.885574  523084 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:48:09.885645  523084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:48:09.899356  523084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:48:09.912895  523084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:48:10.058035  523084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:48:10.201818  523084 docker.go:233] disabling docker service ...
	I0730 00:48:10.201892  523084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:48:10.217976  523084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:48:10.231647  523084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:48:10.376508  523084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:48:10.521729  523084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:48:10.535726  523084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:48:10.553426  523084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:48:10.553495  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.563277  523084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:48:10.563353  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.573106  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.582679  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.592273  523084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:48:10.602125  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.611806  523084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.622038  523084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:48:10.631437  523084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:48:10.639954  523084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:48:10.648234  523084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:48:10.792016  523084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 00:48:19.388392  523084 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.596329469s)
	I0730 00:48:19.388426  523084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:48:19.388485  523084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:48:19.393268  523084 start.go:563] Will wait 60s for crictl version
	I0730 00:48:19.393340  523084 ssh_runner.go:195] Run: which crictl
	I0730 00:48:19.396948  523084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:48:19.437444  523084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:48:19.437556  523084 ssh_runner.go:195] Run: crio --version
	I0730 00:48:19.466451  523084 ssh_runner.go:195] Run: crio --version
	I0730 00:48:19.495176  523084 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:48:19.496455  523084 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:48:19.499397  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:19.499744  523084 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:48:19.499773  523084 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:48:19.499951  523084 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:48:19.504529  523084 kubeadm.go:883] updating cluster {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 00:48:19.504688  523084 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:48:19.504761  523084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:48:19.547027  523084 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:48:19.547049  523084 crio.go:433] Images already preloaded, skipping extraction
	I0730 00:48:19.547109  523084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:48:19.579733  523084 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:48:19.579757  523084 cache_images.go:84] Images are preloaded, skipping loading
	I0730 00:48:19.579767  523084 kubeadm.go:934] updating node { 192.168.39.80 8443 v1.30.3 crio true true} ...
	I0730 00:48:19.579877  523084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:48:19.579940  523084 ssh_runner.go:195] Run: crio config
	I0730 00:48:19.628868  523084 cni.go:84] Creating CNI manager for ""
	I0730 00:48:19.628887  523084 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0730 00:48:19.628896  523084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 00:48:19.628918  523084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-161305 NodeName:ha-161305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 00:48:19.629149  523084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-161305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 00:48:19.629173  523084 kube-vip.go:115] generating kube-vip config ...
	I0730 00:48:19.629232  523084 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:48:19.640609  523084 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:48:19.640741  523084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
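The kube-vip static pod manifest above is rendered by minikube with the node's VIP (192.168.39.254), interface (eth0), and API port (8443) substituted in. Below is a rough, illustrative Go text/template sketch of that substitution step; the struct and template text are assumptions for illustration, not minikube's actual kube-vip template.

// Illustrative only: render kube-vip-style env entries from a template.
// The struct and template text are assumptions, not minikube's real template.
package main

import (
	"os"
	"text/template"
)

type vipConfig struct {
	Interface string
	VIP       string
	Port      int
}

const envTmpl = `    - name: vip_interface
      value: {{.Interface}}
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(envTmpl))
	// Values taken from the generated manifest above.
	if err := t.Execute(os.Stdout, vipConfig{Interface: "eth0", VIP: "192.168.39.254", Port: 8443}); err != nil {
		panic(err)
	}
}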
	I0730 00:48:19.640802  523084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:48:19.650060  523084 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 00:48:19.650149  523084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0730 00:48:19.658991  523084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0730 00:48:19.674738  523084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:48:19.689881  523084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0730 00:48:19.705084  523084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0730 00:48:19.722523  523084 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:48:19.726227  523084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:48:19.869398  523084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:48:19.883718  523084 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.80
	I0730 00:48:19.883744  523084 certs.go:194] generating shared ca certs ...
	I0730 00:48:19.883770  523084 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:48:19.883969  523084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:48:19.884064  523084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:48:19.884092  523084 certs.go:256] generating profile certs ...
	I0730 00:48:19.884193  523084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:48:19.884234  523084 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.6d2de208
	I0730 00:48:19.884256  523084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.6d2de208 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.126 192.168.39.23 192.168.39.254]
	I0730 00:48:20.095553  523084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.6d2de208 ...
	I0730 00:48:20.095583  523084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.6d2de208: {Name:mka5a7d713a84be5a244cfd9bca850e3421af976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:48:20.095751  523084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.6d2de208 ...
	I0730 00:48:20.095766  523084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.6d2de208: {Name:mk8bd8d9a97bc0f3d72fcacd0dc6794358fcd73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:48:20.095838  523084 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.6d2de208 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:48:20.096003  523084 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.6d2de208 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
	I0730 00:48:20.096150  523084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:48:20.096167  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:48:20.096181  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:48:20.096198  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:48:20.096211  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:48:20.096223  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:48:20.096235  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:48:20.096252  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:48:20.096264  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:48:20.096312  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:48:20.096339  523084 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:48:20.096350  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:48:20.096374  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:48:20.096395  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:48:20.096417  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:48:20.096454  523084 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:48:20.096480  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:48:20.096496  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:48:20.096508  523084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:48:20.097243  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:48:20.121509  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:48:20.144334  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:48:20.167901  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:48:20.191485  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0730 00:48:20.213496  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 00:48:20.235569  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:48:20.258616  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:48:20.281395  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:48:20.304276  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:48:20.326844  523084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:48:20.349171  523084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 00:48:20.365089  523084 ssh_runner.go:195] Run: openssl version
	I0730 00:48:20.370768  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:48:20.381669  523084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:48:20.385957  523084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:48:20.386020  523084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:48:20.391428  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:48:20.400319  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:48:20.410812  523084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:48:20.415046  523084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:48:20.415101  523084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:48:20.420315  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 00:48:20.429199  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:48:20.439417  523084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:48:20.443496  523084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:48:20.443552  523084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:48:20.448845  523084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
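The three openssl/ln sequences above populate an OpenSSL-style trust directory: each certificate's subject hash (51391683, 3ec20f2e, b5213941) becomes the symlink name <hash>.0 under /etc/ssl/certs. A minimal Go sketch of the same two steps for one of the certificates, shelling out to openssl and ln exactly as the logged commands do (assumes openssl is on PATH and the process can write to /etc/ssl/certs):

// Sketch of the hash-and-symlink step shown in the log: ask openssl for the
// certificate's subject hash, then link /etc/ssl/certs/<hash>.0 to the PEM.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log above

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of `ln -fs <pem> <link>`; the logged command runs this under sudo.
	if err := exec.Command("ln", "-fs", pem, link).Run(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", pem)
}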
	I0730 00:48:20.457967  523084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:48:20.462378  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 00:48:20.467557  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 00:48:20.472669  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 00:48:20.478358  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 00:48:20.483672  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 00:48:20.488624  523084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
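Each of the -checkend 86400 runs above asks openssl whether the certificate will expire within the next 24 hours (86400 seconds). The following is a rough equivalent using Go's crypto/x509 instead of openssl, shown only to illustrate what the check means; the path is one of the certificates verified above.

// Rough Go equivalent of `openssl x509 -checkend 86400`: report whether a
// certificate's NotAfter falls within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
}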
	I0730 00:48:20.493735  523084 kubeadm.go:392] StartCluster: {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:48:20.493885  523084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 00:48:20.493956  523084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 00:48:20.529201  523084 cri.go:89] found id: "febe530e8cd22403160bb777a5267b14031496dcfd51e5ea49161e00e10b9a02"
	I0730 00:48:20.529226  523084 cri.go:89] found id: "cd75198115c64db74cb8fb79c24b6c0ddb58caaa9bdbd571d858c68d1492e34b"
	I0730 00:48:20.529230  523084 cri.go:89] found id: "05222e14df442628b1f405e4a28c1aa205a2a26a2895a63719aa2d3d3caaa86e"
	I0730 00:48:20.529235  523084 cri.go:89] found id: "2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81"
	I0730 00:48:20.529239  523084 cri.go:89] found id: "f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b"
	I0730 00:48:20.529243  523084 cri.go:89] found id: "922c527ae0dbe9b80f260c1b0f731bd1f2288293e374d28cc401ed825ad66c28"
	I0730 00:48:20.529248  523084 cri.go:89] found id: "625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0"
	I0730 00:48:20.529252  523084 cri.go:89] found id: "1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2"
	I0730 00:48:20.529255  523084 cri.go:89] found id: "3d24c7873d0386c4808a24575ed08832f7f63f8fb8afa4a46a143cb1ef082458"
	I0730 00:48:20.529263  523084 cri.go:89] found id: "a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb"
	I0730 00:48:20.529282  523084 cri.go:89] found id: "0555b883473bf6058a276e33aa31eda2ca0bb6a8a66e92c487c737cf7a5b1552"
	I0730 00:48:20.529287  523084 cri.go:89] found id: "16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12"
	I0730 00:48:20.529291  523084 cri.go:89] found id: "c20fcb6fb9f2b48ccbaa965301c88d20c4cbbf73f701731719356a2d23ce63c2"
	I0730 00:48:20.529295  523084 cri.go:89] found id: ""
	I0730 00:48:20.529357  523084 ssh_runner.go:195] Run: sudo runc list -f json
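The StartCluster step above enumerates the existing kube-system containers by label before deciding what to restart; the "found id" lines are the IDs returned by crictl. A small Go sketch that issues the same crictl query from the log and prints each ID (assumes crictl is installed and sudo is available on the node):

// Sketch of the container discovery step above: list all kube-system
// container IDs via crictl, as the logged command does, and print each one.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}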
	
	
	==> CRI-O <==
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.850516744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300802850487875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1eac359-16fe-42d7-a49b-34412e5f313f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.851144305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2df066e5-edd5-45ea-b41c-f0f092d06c96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.851205310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2df066e5-edd5-45ea-b41c-f0f092d06c96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.851630629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722300587368902091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722300548417942569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300539617070118,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722300537988526839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722300533365432790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300520078893576,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300506677821824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300506475463520,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c0
4aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506533828163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506420804254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300506356450675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300506228530979,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300506248244840,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300506155627860,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300002300280990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857592859588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857553339585,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722299845777144838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299841990836556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299822323178898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722299822148886240,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2df066e5-edd5-45ea-b41c-f0f092d06c96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.890748034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff4af5d1-2904-4fff-a89a-40a5f9963ab8 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.890837858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff4af5d1-2904-4fff-a89a-40a5f9963ab8 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.891781114Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ff1db22-9790-4ddd-8f28-e269a7a8af24 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.892461761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300802892435630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ff1db22-9790-4ddd-8f28-e269a7a8af24 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.893000181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b52c82d3-6133-4c15-9d81-62734338c7fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.893056413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b52c82d3-6133-4c15-9d81-62734338c7fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.893465152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722300587368902091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722300548417942569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300539617070118,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722300537988526839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722300533365432790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300520078893576,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300506677821824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300506475463520,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c0
4aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506533828163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506420804254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300506356450675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300506228530979,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300506248244840,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300506155627860,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300002300280990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857592859588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857553339585,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722299845777144838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299841990836556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299822323178898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722299822148886240,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b52c82d3-6133-4c15-9d81-62734338c7fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.933174586Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49dfa063-fb18-4336-baa7-2963dc5d99ef name=/runtime.v1.RuntimeService/Version
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.933248215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49dfa063-fb18-4336-baa7-2963dc5d99ef name=/runtime.v1.RuntimeService/Version
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.934671583Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20c240b4-4a87-48b0-a990-e36f97539e59 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.935948839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300802935921245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20c240b4-4a87-48b0-a990-e36f97539e59 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.936527116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=068976c4-a52e-49bc-8fd1-c47a7224c730 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.936583347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=068976c4-a52e-49bc-8fd1-c47a7224c730 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.937024406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722300587368902091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722300548417942569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300539617070118,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722300537988526839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722300533365432790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300520078893576,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300506677821824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300506475463520,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c0
4aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506533828163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506420804254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300506356450675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300506228530979,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300506248244840,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300506155627860,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300002300280990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857592859588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857553339585,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722299845777144838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299841990836556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299822323178898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722299822148886240,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=068976c4-a52e-49bc-8fd1-c47a7224c730 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.977648402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=969a5031-c3f8-4d4a-8ab2-ba97908d15f8 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.977762048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=969a5031-c3f8-4d4a-8ab2-ba97908d15f8 name=/runtime.v1.RuntimeService/Version
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.987931815Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2dc7111-0379-4dd5-8451-d7344af6ce4e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.988463315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722300802988437921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2dc7111-0379-4dd5-8451-d7344af6ce4e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.989375561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23fd59a7-c57c-4347-a53f-5ecd286178c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.989437163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23fd59a7-c57c-4347-a53f-5ecd286178c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 00:53:22 ha-161305 crio[3748]: time="2024-07-30 00:53:22.989937448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722300587368902091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722300548417942569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300539617070118,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722300537988526839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d,PodSandboxId:0377dfc5f5117cc423ecf0e7564c9d0f44a785587e3fe95537c404c1cea9da74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722300533365432790,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300520078893576,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300506677821824,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300506475463520,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c0
4aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506533828163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300506420804254,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300506356450675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b,PodSandboxId:e9c7b84c6c909e0312f51d25db37411462611f9a8b00c5266371a55acdbd72e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300506228530979,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300506248244840,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55,PodSandboxId:d09f7c2c32def39846865da69b2bdde066d4399d5a917f585fe7083fb36d7fe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300506155627860,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Ann
otations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33787e97a5dcaddd5f2735501511ec5ef79b336c7c72e33131638d88f5c44dbc,PodSandboxId:1ce43d8d3ab67f3e27f91d528e0ed1bfe596fc7fc54a88db4d9dcf696481a18d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300002300280990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81,PodSandboxId:5d3af1b83b99280051be3f196294c0739af6f75c4c072ffe3417eb4b41567ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857592859588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b,PodSandboxId:fb1702cc4124558edb130062fe365cb0a69ed2354f3862a1e261ceec9b4be670,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722299857553339585,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0,PodSandboxId:ceb9cb15a729ff214196a39227f007772eac9cc71d5d16ab2ca9650ebe0e993e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722299845777144838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2,PodSandboxId:5821d52c1a1ddd6ac73f27a91ed802b7f8fa1a4497de9e525311fe20706f91d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722299841990836556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb,PodSandboxId:3f0cef29badb6147750c969d2af195cf236595178c72e1d904ee72e395a7847a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722299822323178898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12,PodSandboxId:cb4dface16b3855de1d697c0fa06c271f29698e9f0c5adde6b15e6ed6721bc4e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722299822148886240,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23fd59a7-c57c-4347-a53f-5ecd286178c2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	571f739c3ec7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   0377dfc5f5117       storage-provisioner
	3034a674ef2bd       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   d09f7c2c32def       kube-controller-manager-ha-161305
	37637e74a1f33       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   45a56eb6f8ca1       busybox-fc5497c4f-ttjx8
	beb8a63139cdb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   e9c7b84c6c909       kube-apiserver-ha-161305
	dbeddb236c6c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   0377dfc5f5117       storage-provisioner
	eca65a5f97abc       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   f2cde2eb18016       kube-vip-ha-161305
	3794d8da6d031       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   62603cd489d83       kube-proxy-wptvn
	225f65c04aecc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   a7a7848979d5d       coredns-7db6d8ff4d-mzcln
	a4940cda3f54a       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   3452972572a3b       kindnet-zrzxf
	e7edc1afdc01a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   14b01800078de       coredns-7db6d8ff4d-bdpds
	3ab677666e42b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   5937bdc3a20dc       kube-scheduler-ha-161305
	090db2af84793       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   9818b8693e1bc       etcd-ha-161305
	e11b91a20a338       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Exited              kube-apiserver            2                   e9c7b84c6c909       kube-apiserver-ha-161305
	3b13100aa8cf3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Exited              kube-controller-manager   1                   d09f7c2c32def       kube-controller-manager-ha-161305
	33787e97a5dca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   1ce43d8d3ab67       busybox-fc5497c4f-ttjx8
	2b2f636edadaa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   5d3af1b83b992       coredns-7db6d8ff4d-bdpds
	f6480acdda7d5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   fb1702cc41245       coredns-7db6d8ff4d-mzcln
	625a67c138c38       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    15 minutes ago      Exited              kindnet-cni               0                   ceb9cb15a729f       kindnet-zrzxf
	1805553d07226       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   5821d52c1a1dd       kube-proxy-wptvn
	a2084c9181292       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   3f0cef29badb6       etcd-ha-161305
	16a5f7eb1118e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   cb4dface16b38       kube-scheduler-ha-161305
	
	
	==> coredns [225f65c04aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57474->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2075708608]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 00:48:38.336) (total time: 10441ms):
	Trace[2075708608]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57474->10.96.0.1:443: read: connection reset by peer 10441ms (00:48:48.777)
	Trace[2075708608]: [10.4414144s] [10.4414144s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57474->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81] <==
	[INFO] 10.244.0.4:49078 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000206386s
	[INFO] 10.244.1.2:48352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113505s
	[INFO] 10.244.1.2:37780 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001816793s
	[INFO] 10.244.1.2:33649 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128148s
	[INFO] 10.244.1.2:48051 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092471s
	[INFO] 10.244.1.2:36198 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007191s
	[INFO] 10.244.2.2:35489 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018657s
	[INFO] 10.244.2.2:54354 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142599s
	[INFO] 10.244.2.2:58953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134101s
	[INFO] 10.244.2.2:60956 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078404s
	[INFO] 10.244.0.4:45817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115908s
	[INFO] 10.244.1.2:38448 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117252s
	[INFO] 10.244.1.2:37783 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087436s
	[INFO] 10.244.2.2:44186 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138301s
	[INFO] 10.244.0.4:42700 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000074904s
	[INFO] 10.244.0.4:41284 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112024s
	[INFO] 10.244.0.4:39360 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000096229s
	[INFO] 10.244.1.2:35167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095182s
	[INFO] 10.244.1.2:37860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007318s
	[INFO] 10.244.1.2:40179 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076418s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b] <==
	Trace[363621454]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:48:41.236)
	Trace[363621454]: [10.001813929s] [10.001813929s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[809198444]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 00:48:38.223) (total time: 13405ms):
	Trace[809198444]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47084->10.96.0.1:443: read: connection reset by peer 13405ms (00:48:51.628)
	Trace[809198444]: [13.405604121s] [13.405604121s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47108->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47108->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b] <==
	[INFO] 10.244.2.2:59859 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017939s
	[INFO] 10.244.2.2:41789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144993s
	[INFO] 10.244.2.2:46813 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143383s
	[INFO] 10.244.2.2:35590 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107787s
	[INFO] 10.244.0.4:40333 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147444s
	[INFO] 10.244.0.4:41070 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094914s
	[INFO] 10.244.0.4:60015 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119517s
	[INFO] 10.244.1.2:41685 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001405792s
	[INFO] 10.244.1.2:48444 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009825s
	[INFO] 10.244.1.2:38476 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107007s
	[INFO] 10.244.0.4:41768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098341s
	[INFO] 10.244.0.4:54976 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067321s
	[INFO] 10.244.0.4:60391 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053259s
	[INFO] 10.244.1.2:36807 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164322s
	[INFO] 10.244.1.2:38239 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011686s
	[INFO] 10.244.2.2:58831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129058s
	[INFO] 10.244.2.2:56804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134761s
	[INFO] 10.244.2.2:41613 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109006s
	[INFO] 10.244.0.4:60974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155306s
	[INFO] 10.244.1.2:58876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114279s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-161305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T00_37_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:37:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:53:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:49:07 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:49:07 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:49:07 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:49:07 +0000   Tue, 30 Jul 2024 00:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-161305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee5b503318a04d5fa9f6151b095f43f6
	  System UUID:                ee5b5033-18a0-4d5f-a9f6-151b095f43f6
	  Boot ID:                    c41944eb-218c-41cb-bf89-ac90ba0a8709
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ttjx8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-bdpds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-mzcln             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-161305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-zrzxf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-161305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-161305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-wptvn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-161305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-161305                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m13s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-161305 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-161305 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-161305 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Warning  ContainerGCFailed        5m15s (x2 over 6m15s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           3m6s                   node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	
	
	Name:               ha-161305-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_38_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:38:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:53:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 00:49:51 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 00:49:51 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 00:49:51 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 00:49:51 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    ha-161305-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a157fd7e5c14479d97024c5548311976
	  System UUID:                a157fd7e-5c14-479d-9702-4c5548311976
	  Boot ID:                    4f645d45-ff44-451d-986c-85a804baaea9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v2pq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-161305-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-dj7v2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-161305-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-161305-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pqr2f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-161305-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-161305-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-161305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-161305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-161305-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-161305-m02 status is now: NodeNotReady
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s (x8 over 4m41s)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s (x8 over 4m41s)  kubelet          Node ha-161305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s (x7 over 4m41s)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal  RegisteredNode           3m6s                   node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	
	
	Name:               ha-161305-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_40_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:40:35 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:50:55 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-161305-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16981c9b42447afa5527547ca393cc7
	  System UUID:                b16981c9-b424-47af-a552-7547ca393cc7
	  Boot ID:                    cd17f5a2-30ac-44ae-8c6d-bf637a282fdf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7sdnf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-bdl2h              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-f9bfb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-161305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-161305-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-161305-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-161305-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           3m6s                   node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-161305-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-161305-m04 has been rebooted, boot id: cd17f5a2-30ac-44ae-8c6d-bf637a282fdf
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-161305-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 3m25s)   node-controller  Node ha-161305-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.201013] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.060589] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060160] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.175750] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.105381] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.262727] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +3.969960] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[Jul30 00:37] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063938] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.953682] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.085875] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.685156] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.526010] kauditd_printk_skb: 38 callbacks suppressed
	[Jul30 00:38] kauditd_printk_skb: 26 callbacks suppressed
	[Jul30 00:48] systemd-fstab-generator[3667]: Ignoring "noauto" option for root device
	[  +0.145039] systemd-fstab-generator[3679]: Ignoring "noauto" option for root device
	[  +0.168770] systemd-fstab-generator[3693]: Ignoring "noauto" option for root device
	[  +0.148265] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +0.269338] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[  +9.072086] systemd-fstab-generator[3835]: Ignoring "noauto" option for root device
	[  +0.089847] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.952407] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.176537] kauditd_printk_skb: 97 callbacks suppressed
	[ +28.601116] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd] <==
	{"level":"info","ts":"2024-07-30T00:49:59.580467Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:49:59.581544Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:49:59.583784Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:49:59.584493Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d33e7f1dba1e46ae","to":"f9852bfb3a2ffd8d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-30T00:49:59.584598Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"warn","ts":"2024-07-30T00:50:02.333235Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:50:02.333245Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f9852bfb3a2ffd8d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-30T00:50:49.233207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae switched to configuration voters=(11771616502301995634 15221743556212180654)"}
	{"level":"info","ts":"2024-07-30T00:50:49.235556Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"e6a6fd39da75dc67","local-member-id":"d33e7f1dba1e46ae","removed-remote-peer-id":"f9852bfb3a2ffd8d","removed-remote-peer-urls":["https://192.168.39.23:2380"]}
	{"level":"info","ts":"2024-07-30T00:50:49.23563Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"warn","ts":"2024-07-30T00:50:49.236106Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:50:49.236177Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"warn","ts":"2024-07-30T00:50:49.23687Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:50:49.237064Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:50:49.237199Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"warn","ts":"2024-07-30T00:50:49.237426Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d","error":"context canceled"}
	{"level":"warn","ts":"2024-07-30T00:50:49.237486Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f9852bfb3a2ffd8d","error":"failed to read f9852bfb3a2ffd8d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-30T00:50:49.237523Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"warn","ts":"2024-07-30T00:50:49.237913Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d","error":"context canceled"}
	{"level":"info","ts":"2024-07-30T00:50:49.238014Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:50:49.238035Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:50:49.23805Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"d33e7f1dba1e46ae","removed-remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:50:49.238098Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"d33e7f1dba1e46ae","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"f9852bfb3a2ffd8d"}
	{"level":"warn","ts":"2024-07-30T00:50:49.251256Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"d33e7f1dba1e46ae","remote-peer-id-stream-handler":"d33e7f1dba1e46ae","remote-peer-id-from":"f9852bfb3a2ffd8d"}
	{"level":"warn","ts":"2024-07-30T00:50:49.263329Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"d33e7f1dba1e46ae","remote-peer-id-stream-handler":"d33e7f1dba1e46ae","remote-peer-id-from":"f9852bfb3a2ffd8d"}
	
	
	==> etcd [a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb] <==
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/30 00:46:38 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-30T00:46:38.599404Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.80:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T00:46:38.599708Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.80:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-30T00:46:38.601049Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"d33e7f1dba1e46ae","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-30T00:46:38.601257Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.60127Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601293Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601377Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601404Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601434Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.60145Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:46:38.601455Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.601463Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.60148Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.601533Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.601558Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.601581Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.60159Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f9852bfb3a2ffd8d"}
	{"level":"info","ts":"2024-07-30T00:46:38.604497Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2024-07-30T00:46:38.604707Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2024-07-30T00:46:38.604766Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-161305","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"]}
	
	
	==> kernel <==
	 00:53:23 up 16 min,  0 users,  load average: 0.01, 0.24, 0.28
	Linux ha-161305 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0] <==
	I0730 00:46:16.760501       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:46:16.760507       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:46:16.760701       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:46:16.760721       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:46:16.760793       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:46:16.760810       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:46:26.757441       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:46:26.757491       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:46:26.757650       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:46:26.757670       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:46:26.757722       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:46:26.757742       1 main.go:299] handling current node
	I0730 00:46:26.757764       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:46:26.757769       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	E0730 00:46:26.860545       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1844&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	W0730 00:46:29.932396       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	E0730 00:46:29.932462       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1844": dial tcp 10.96.0.1:443: connect: no route to host
	I0730 00:46:36.757464       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0730 00:46:36.757539       1 main.go:322] Node ha-161305-m03 has CIDR [10.244.2.0/24] 
	I0730 00:46:36.757686       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:46:36.757705       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:46:36.757760       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:46:36.757775       1 main.go:299] handling current node
	I0730 00:46:36.757791       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:46:36.757795       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5] <==
	I0730 00:52:37.688134       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:52:47.695718       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:52:47.695841       1 main.go:299] handling current node
	I0730 00:52:47.695870       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:52:47.695899       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:52:47.696093       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:52:47.696127       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:52:57.693232       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:52:57.693284       1 main.go:299] handling current node
	I0730 00:52:57.693303       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:52:57.693311       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:52:57.693491       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:52:57.693529       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:53:07.689316       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:53:07.689441       1 main.go:299] handling current node
	I0730 00:53:07.689469       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:53:07.689487       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:53:07.689671       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:53:07.689729       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:53:17.689056       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:53:17.689165       1 main.go:299] handling current node
	I0730 00:53:17.689198       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:53:17.689217       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:53:17.689389       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:53:17.689416       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be] <==
	I0730 00:49:04.169081       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0730 00:49:04.269632       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0730 00:49:04.279894       1 aggregator.go:165] initial CRD sync complete...
	I0730 00:49:04.280340       1 autoregister_controller.go:141] Starting autoregister controller
	I0730 00:49:04.280384       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0730 00:49:04.331260       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0730 00:49:04.335774       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 00:49:04.335864       1 policy_source.go:224] refreshing policies
	I0730 00:49:04.367456       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0730 00:49:04.367554       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0730 00:49:04.368695       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 00:49:04.369326       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 00:49:04.369384       1 shared_informer.go:320] Caches are synced for configmaps
	I0730 00:49:04.369340       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0730 00:49:04.374804       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0730 00:49:04.382496       1 cache.go:39] Caches are synced for autoregister controller
	W0730 00:49:04.387719       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23]
	I0730 00:49:04.389218       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 00:49:04.392436       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0730 00:49:04.403676       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0730 00:49:04.409228       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0730 00:49:05.174481       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0730 00:49:05.530471       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23 192.168.39.80]
	W0730 00:49:25.533586       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.126 192.168.39.80]
	W0730 00:51:05.539638       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.126 192.168.39.80]
	
	
	==> kube-apiserver [e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b] <==
	I0730 00:48:26.980191       1 options.go:221] external host was not specified, using 192.168.39.80
	I0730 00:48:26.981314       1 server.go:148] Version: v1.30.3
	I0730 00:48:26.982255       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:48:27.768723       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0730 00:48:27.770076       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 00:48:27.771859       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0730 00:48:27.771924       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0730 00:48:27.772133       1 instance.go:299] Using reconciler: lease
	W0730 00:48:47.766575       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0730 00:48:47.766765       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0730 00:48:47.773013       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0730 00:48:47.773016       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7] <==
	I0730 00:50:45.943382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.51794ms"
	I0730 00:50:45.990818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.315454ms"
	I0730 00:50:46.064893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.003438ms"
	E0730 00:50:46.065107       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0730 00:50:46.065368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.279µs"
	I0730 00:50:46.071382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.215µs"
	I0730 00:50:47.954810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.781µs"
	I0730 00:50:48.206198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.068µs"
	I0730 00:50:48.222750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.978µs"
	I0730 00:50:48.227412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.093µs"
	I0730 00:50:49.763649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.115138ms"
	I0730 00:50:49.764036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.434µs"
	I0730 00:51:00.762013       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-161305-m04"
	E0730 00:51:20.094149       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	E0730 00:51:20.094196       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	E0730 00:51:20.094203       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	E0730 00:51:20.094208       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	E0730 00:51:20.094214       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	I0730 00:51:38.472706       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.261777ms"
	I0730 00:51:38.472840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.476µs"
	E0730 00:51:40.094629       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	E0730 00:51:40.095170       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	E0730 00:51:40.095243       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	E0730 00:51:40.095280       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	E0730 00:51:40.095307       1 gc_controller.go:153] "Failed to get node" err="node \"ha-161305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-161305-m03"
	
	
	==> kube-controller-manager [3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55] <==
	I0730 00:48:27.422433       1 serving.go:380] Generated self-signed cert in-memory
	I0730 00:48:28.228723       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0730 00:48:28.228760       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:48:28.230597       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0730 00:48:28.230751       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0730 00:48:28.232302       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0730 00:48:28.232375       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0730 00:48:48.777749       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.80:8443/healthz\": dial tcp 192.168.39.80:8443: connect: connection refused"
	
	
	==> kube-proxy [1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2] <==
	E0730 00:45:19.853076       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:22.924319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:22.924401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:22.924471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:22.924508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:22.924319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:22.924579       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:29.516449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:29.516527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:29.516449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:29.516562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:29.516761       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:29.516844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:38.733529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:38.733626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:38.733640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:38.733674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:41.805517       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:41.805621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:45:54.093269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:45:54.093333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:46:06.381025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:46:06.381106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:46:09.453358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:46:09.454004       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&resourceVersion=1844": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f] <==
	I0730 00:48:28.260039       1 server_linux.go:69] "Using iptables proxy"
	E0730 00:48:30.765334       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:33.837250       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:36.908953       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:43.052445       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:52.268874       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0730 00:49:09.781706       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.80"]
	I0730 00:49:09.813651       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 00:49:09.813756       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 00:49:09.813786       1 server_linux.go:165] "Using iptables Proxier"
	I0730 00:49:09.816188       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 00:49:09.816436       1 server.go:872] "Version info" version="v1.30.3"
	I0730 00:49:09.816460       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:49:09.817946       1 config.go:192] "Starting service config controller"
	I0730 00:49:09.818049       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 00:49:09.818118       1 config.go:101] "Starting endpoint slice config controller"
	I0730 00:49:09.818137       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 00:49:09.818902       1 config.go:319] "Starting node config controller"
	I0730 00:49:09.818937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 00:49:09.918952       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 00:49:09.919067       1 shared_informer.go:320] Caches are synced for node config
	I0730 00:49:09.919080       1 shared_informer.go:320] Caches are synced for service config
	W0730 00:51:50.555495       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0730 00:51:50.555670       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0730 00:51:50.555731       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12] <==
	W0730 00:46:33.445134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:33.445176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 00:46:34.101706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 00:46:34.101839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 00:46:34.177268       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 00:46:34.177315       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 00:46:34.631295       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 00:46:34.631393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0730 00:46:35.130472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0730 00:46:35.130517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0730 00:46:35.290905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0730 00:46:35.290950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0730 00:46:35.410283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 00:46:35.410388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 00:46:35.787271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:35.787319       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0730 00:46:35.859667       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 00:46:35.859806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 00:46:35.874789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 00:46:35.874877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 00:46:36.056161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:36.056210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 00:46:36.183016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:36.183058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 00:46:38.518229       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e] <==
	W0730 00:48:56.871568       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.80:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:56.871699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.80:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:56.909449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.80:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:56.909551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.80:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.034082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.034195       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.221307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.80:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.221364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.80:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.516154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.80:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.516202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.80:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.783858       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.784029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.858869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.80:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.859101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.80:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.948932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.80:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.949834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.80:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:49:04.195620       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 00:49:04.196204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 00:49:04.197280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 00:49:04.197343       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 00:49:04.197477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 00:49:04.197511       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 00:49:04.197564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 00:49:04.197591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0730 00:49:11.292412       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 00:49:33 ha-161305 kubelet[1372]: E0730 00:49:33.354728    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75260b22-5ffc-4848-8c70-5b9cb3f010bf)\"" pod="kube-system/storage-provisioner" podUID="75260b22-5ffc-4848-8c70-5b9cb3f010bf"
	Jul 30 00:49:47 ha-161305 kubelet[1372]: I0730 00:49:47.354353    1372 scope.go:117] "RemoveContainer" containerID="dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d"
	Jul 30 00:50:04 ha-161305 kubelet[1372]: I0730 00:50:04.355047    1372 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-161305" podUID="084d986e-4abd-4c66-aea9-5738f6a60ac5"
	Jul 30 00:50:04 ha-161305 kubelet[1372]: I0730 00:50:04.379118    1372 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-161305"
	Jul 30 00:50:08 ha-161305 kubelet[1372]: E0730 00:50:08.373912    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:50:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:50:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:50:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:50:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:50:08 ha-161305 kubelet[1372]: I0730 00:50:08.377762    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-161305" podStartSLOduration=4.377727113 podStartE2EDuration="4.377727113s" podCreationTimestamp="2024-07-30 00:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-30 00:50:08.377248376 +0000 UTC m=+780.149997743" watchObservedRunningTime="2024-07-30 00:50:08.377727113 +0000 UTC m=+780.150476465"
	Jul 30 00:51:08 ha-161305 kubelet[1372]: E0730 00:51:08.372856    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:51:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:51:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:51:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:51:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:52:08 ha-161305 kubelet[1372]: E0730 00:52:08.372812    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:52:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:52:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:52:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:52:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 00:53:08 ha-161305 kubelet[1372]: E0730 00:53:08.373067    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 00:53:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 00:53:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 00:53:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 00:53:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0730 00:53:22.587241  525446 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19346-495103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-161305 -n ha-161305
helpers_test.go:261: (dbg) Run:  kubectl --context ha-161305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.64s)
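Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner aborts when a single token exceeds its buffer limit (64 KiB by default), so one very long line in lastStart.txt is enough to make the log read fail. A minimal sketch, assuming a hypothetical file path, of that failure mode and of reading such a file with an enlarged scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("/tmp/lastStart.txt") // hypothetical path
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default limit is bufio.MaxScanTokenSize (64 KiB); a longer line makes
		// sc.Scan() stop and sc.Err() return bufio.ErrTooLong
		// ("bufio.Scanner: token too long"). Raising the max before scanning
		// lets longer lines through.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text()
		}
		if err := sc.Err(); err != nil {
			fmt.Println("scan:", err)
		}
	}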

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (785.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-161305 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0730 00:53:42.935238  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:56:10.083898  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:56:45.982183  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:57:33.128167  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:58:42.934485  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 01:01:10.081759  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 01:03:42.934547  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 01:06:10.081595  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
ha_test.go:560: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-161305 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 80 (13m3.518011471s)
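The repeated cert_rotation.go "key failed with : open .../client.crt: no such file or directory" lines above appear to come from a client-certificate reload loop that still references profiles (addons-091578, functional-844183) whose files have since been deleted, so each reload reduces to a failed file open. A minimal sketch, with hypothetical paths, of how a removed client certificate pair surfaces as exactly this error:

	package main

	import (
		"crypto/tls"
		"fmt"
	)

	func main() {
		certFile := "/home/jenkins/.minikube/profiles/example/client.crt" // hypothetical
		keyFile := "/home/jenkins/.minikube/profiles/example/client.key"  // hypothetical
		if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
			// With the profile directory gone this prints an *os.PathError,
			// e.g. "open .../client.crt: no such file or directory".
			fmt.Println("reload failed:", err)
		}
	}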

                                                
                                                
-- stdout --
	* [ha-161305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19346
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-161305" primary control-plane node in "ha-161305" cluster
	* Updating the running kvm2 "ha-161305" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-161305-m02" control-plane node in "ha-161305" cluster
	* Updating the running kvm2 "ha-161305-m02" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.80
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.80
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:53:24.553567  525511 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:53:24.553687  525511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:53:24.553697  525511 out.go:304] Setting ErrFile to fd 2...
	I0730 00:53:24.553701  525511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:53:24.553893  525511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:53:24.554468  525511 out.go:298] Setting JSON to false
	I0730 00:53:24.555481  525511 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9347,"bootTime":1722291458,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:53:24.555540  525511 start.go:139] virtualization: kvm guest
	I0730 00:53:24.557819  525511 out.go:177] * [ha-161305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:53:24.559542  525511 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:53:24.559576  525511 notify.go:220] Checking for updates...
	I0730 00:53:24.561900  525511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:53:24.563216  525511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:53:24.564528  525511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:53:24.565732  525511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:53:24.566889  525511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:53:24.568642  525511 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:53:24.569329  525511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:24.569385  525511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:24.585045  525511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I0730 00:53:24.585526  525511 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:24.586042  525511 main.go:141] libmachine: Using API Version  1
	I0730 00:53:24.586069  525511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:24.586475  525511 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:24.586675  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:53:24.586963  525511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:53:24.587289  525511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:24.587331  525511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:24.603408  525511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I0730 00:53:24.603807  525511 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:24.604245  525511 main.go:141] libmachine: Using API Version  1
	I0730 00:53:24.604275  525511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:24.604591  525511 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:24.604834  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:53:24.645144  525511 out.go:177] * Using the kvm2 driver based on existing profile
	I0730 00:53:24.646544  525511 start.go:297] selected driver: kvm2
	I0730 00:53:24.646559  525511 start.go:901] validating driver "kvm2" against &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:53:24.646723  525511 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:53:24.647123  525511 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:53:24.647264  525511 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:53:24.664052  525511 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:53:24.665146  525511 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:53:24.665207  525511 cni.go:84] Creating CNI manager for ""
	I0730 00:53:24.665216  525511 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 00:53:24.665312  525511 start.go:340] cluster config:
	{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:53:24.665532  525511 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:53:24.668223  525511 out.go:177] * Starting "ha-161305" primary control-plane node in "ha-161305" cluster
	I0730 00:53:24.669523  525511 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:53:24.669567  525511 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:53:24.669580  525511 cache.go:56] Caching tarball of preloaded images
	I0730 00:53:24.669668  525511 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:53:24.669685  525511 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:53:24.669824  525511 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:53:24.670032  525511 start.go:360] acquireMachinesLock for ha-161305: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:53:24.670077  525511 start.go:364] duration metric: took 25.78µs to acquireMachinesLock for "ha-161305"
	I0730 00:53:24.670097  525511 start.go:96] Skipping create...Using existing machine configuration
	I0730 00:53:24.670105  525511 fix.go:54] fixHost starting: 
	I0730 00:53:24.670379  525511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:24.670414  525511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:24.685596  525511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0730 00:53:24.685974  525511 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:24.686457  525511 main.go:141] libmachine: Using API Version  1
	I0730 00:53:24.686481  525511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:24.686816  525511 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:24.687042  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:53:24.687198  525511 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:53:24.688977  525511 fix.go:112] recreateIfNeeded on ha-161305: state=Running err=<nil>
	W0730 00:53:24.689002  525511 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 00:53:24.690970  525511 out.go:177] * Updating the running kvm2 "ha-161305" VM ...
	I0730 00:53:24.692812  525511 machine.go:94] provisionDockerMachine start ...
	I0730 00:53:24.692923  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:53:24.693376  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:24.696498  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.697131  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:24.697161  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.697436  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:24.697678  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.697848  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.698012  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:24.698217  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:53:24.698464  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:53:24.698479  525511 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 00:53:24.812656  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:53:24.812690  525511 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:53:24.812950  525511 buildroot.go:166] provisioning hostname "ha-161305"
	I0730 00:53:24.812981  525511 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:53:24.813163  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:24.816882  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:24.817911  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.817938  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.817960  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:24.817988  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.818131  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.818437  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:24.818627  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:53:24.818824  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:53:24.818838  525511 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305 && echo "ha-161305" | sudo tee /etc/hostname
	I0730 00:53:24.947560  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:53:24.947607  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:24.950312  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.950649  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:24.950673  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.950905  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:24.951132  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.951323  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.951485  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:24.951635  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:53:24.951812  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:53:24.951833  525511 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:53:25.065657  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:53:25.065691  525511 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:53:25.065713  525511 buildroot.go:174] setting up certificates
	I0730 00:53:25.065722  525511 provision.go:84] configureAuth start
	I0730 00:53:25.065733  525511 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:53:25.066028  525511 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:53:25.068826  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.069221  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:25.069249  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.069369  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:25.071794  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.072164  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:25.072182  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.072358  525511 provision.go:143] copyHostCerts
	I0730 00:53:25.072393  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:53:25.072449  525511 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:53:25.072467  525511 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:53:25.072548  525511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:53:25.072644  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:53:25.072670  525511 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:53:25.072680  525511 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:53:25.072741  525511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:53:25.072806  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:53:25.072829  525511 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:53:25.072837  525511 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:53:25.072871  525511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:53:25.072935  525511 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305 san=[127.0.0.1 192.168.39.80 ha-161305 localhost minikube]
	I0730 00:53:25.177489  525511 provision.go:177] copyRemoteCerts
	I0730 00:53:25.177553  525511 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:53:25.177581  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:25.180165  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.180483  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:25.180508  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.180664  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:25.180898  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:25.181098  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:25.181238  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:53:25.269971  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:53:25.270052  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:53:25.297957  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:53:25.298039  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0730 00:53:25.326664  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:53:25.326742  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:53:25.358995  525511 provision.go:87] duration metric: took 293.253296ms to configureAuth
	I0730 00:53:25.359043  525511 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:53:25.359333  525511 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:53:25.359428  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:25.362624  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.362977  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:25.363010  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.363235  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:25.363425  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:25.363611  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:25.363758  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:25.363957  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:53:25.364173  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:53:25.364196  525511 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:55:00.012597  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:55:00.012633  525511 machine.go:97] duration metric: took 1m35.319747222s to provisionDockerMachine
	I0730 00:55:00.012652  525511 start.go:293] postStartSetup for "ha-161305" (driver="kvm2")
	I0730 00:55:00.012664  525511 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:55:00.012682  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.013103  525511 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:55:00.013145  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.016233  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.016721  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.016750  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.016885  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.017080  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.017226  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.017342  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:55:00.105272  525511 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:55:00.109316  525511 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:55:00.109340  525511 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:55:00.109417  525511 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:55:00.109505  525511 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:55:00.109543  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:55:00.109648  525511 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:55:00.118780  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:55:00.141382  525511 start.go:296] duration metric: took 128.715235ms for postStartSetup
	I0730 00:55:00.141427  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.141764  525511 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0730 00:55:00.141793  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.144560  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.144920  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.144948  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.145076  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.145348  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.145569  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.145736  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	W0730 00:55:00.230974  525511 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0730 00:55:00.231006  525511 fix.go:56] duration metric: took 1m35.560901937s for fixHost
	I0730 00:55:00.231030  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.233657  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.234048  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.234079  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.234261  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.234493  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.234670  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.234819  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.234984  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:55:00.235158  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:55:00.235171  525511 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0730 00:55:00.356275  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722300900.314749906
	
	I0730 00:55:00.356300  525511 fix.go:216] guest clock: 1722300900.314749906
	I0730 00:55:00.356315  525511 fix.go:229] Guest: 2024-07-30 00:55:00.314749906 +0000 UTC Remote: 2024-07-30 00:55:00.231014598 +0000 UTC m=+95.711873281 (delta=83.735308ms)
	I0730 00:55:00.356335  525511 fix.go:200] guest clock delta is within tolerance: 83.735308ms
	I0730 00:55:00.356344  525511 start.go:83] releasing machines lock for "ha-161305", held for 1m35.686252359s
	I0730 00:55:00.356363  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.356625  525511 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:55:00.359078  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.359473  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.359504  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.359653  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.360161  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.360362  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.360447  525511 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:55:00.360494  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.360571  525511 ssh_runner.go:195] Run: cat /version.json
	I0730 00:55:00.360597  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.363066  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.363096  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.363443  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.363470  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.363516  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.363548  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.363579  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.363761  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.363763  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.363942  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.363952  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.364111  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.364137  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:55:00.364223  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:55:00.474022  525511 ssh_runner.go:195] Run: systemctl --version
	I0730 00:55:00.479834  525511 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:55:00.640248  525511 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:55:00.645812  525511 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:55:00.645890  525511 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:55:00.656663  525511 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 00:55:00.656690  525511 start.go:495] detecting cgroup driver to use...
	I0730 00:55:00.656776  525511 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:55:00.674989  525511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:55:00.689756  525511 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:55:00.689830  525511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:55:00.705809  525511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:55:00.718691  525511 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:55:00.864576  525511 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:55:01.008399  525511 docker.go:233] disabling docker service ...
	I0730 00:55:01.008474  525511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:55:01.025983  525511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:55:01.039534  525511 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:55:01.185964  525511 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:55:01.337949  525511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:55:01.352151  525511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:55:01.369681  525511 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:55:01.369798  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.381708  525511 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:55:01.381776  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.392651  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.402925  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.413191  525511 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:55:01.423307  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.433503  525511 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.443627  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.455179  525511 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:55:01.465785  525511 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:55:01.475085  525511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:55:01.645738  525511 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 00:55:12.549070  525511 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.903148846s)
	I0730 00:55:12.549114  525511 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:55:12.549164  525511 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:55:12.557670  525511 start.go:563] Will wait 60s for crictl version
	I0730 00:55:12.557741  525511 ssh_runner.go:195] Run: which crictl
	I0730 00:55:12.563399  525511 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:55:12.611536  525511 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:55:12.611636  525511 ssh_runner.go:195] Run: crio --version
	I0730 00:55:12.641392  525511 ssh_runner.go:195] Run: crio --version
	I0730 00:55:12.670857  525511 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:55:12.672180  525511 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:55:12.675012  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:12.675537  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:12.675568  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:12.675804  525511 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:55:12.680572  525511 kubeadm.go:883] updating cluster {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 00:55:12.680750  525511 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:55:12.680809  525511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:55:12.715050  525511 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:55:12.715075  525511 crio.go:433] Images already preloaded, skipping extraction
	I0730 00:55:12.715126  525511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:55:12.748383  525511 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:55:12.748404  525511 cache_images.go:84] Images are preloaded, skipping loading
	I0730 00:55:12.748413  525511 kubeadm.go:934] updating node { 192.168.39.80 8443 v1.30.3 crio true true} ...
	I0730 00:55:12.748532  525511 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:55:12.748615  525511 ssh_runner.go:195] Run: crio config
	I0730 00:55:12.798070  525511 cni.go:84] Creating CNI manager for ""
	I0730 00:55:12.798092  525511 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 00:55:12.798105  525511 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 00:55:12.798138  525511 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-161305 NodeName:ha-161305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 00:55:12.798297  525511 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-161305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 00:55:12.798318  525511 kube-vip.go:115] generating kube-vip config ...
	I0730 00:55:12.798371  525511 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:55:12.809596  525511 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:55:12.809705  525511 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0730 00:55:12.809765  525511 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:55:12.818946  525511 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 00:55:12.819023  525511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0730 00:55:12.828397  525511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0730 00:55:12.846279  525511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:55:12.862615  525511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0730 00:55:12.880863  525511 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0730 00:55:12.900902  525511 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:55:12.905187  525511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:55:13.076422  525511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 00:55:13.092120  525511 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.80
	I0730 00:55:13.092150  525511 certs.go:194] generating shared ca certs ...
	I0730 00:55:13.092167  525511 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:55:13.092368  525511 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:55:13.092426  525511 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:55:13.092441  525511 certs.go:256] generating profile certs ...
	I0730 00:55:13.092530  525511 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:55:13.092570  525511 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.beb08999
	I0730 00:55:13.092592  525511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.beb08999 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.126 192.168.39.254]
	I0730 00:55:13.301529  525511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.beb08999 ...
	I0730 00:55:13.301566  525511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.beb08999: {Name:mkc5ee9b03045e35981247f1df2f286af7fa675d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:55:13.301783  525511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.beb08999 ...
	I0730 00:55:13.301804  525511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.beb08999: {Name:mk809d5c6053b0224c61787ff8e636f5cf9f72dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:55:13.301918  525511 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.beb08999 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:55:13.302135  525511 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.beb08999 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
	I0730 00:55:13.302340  525511 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:55:13.302363  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:55:13.302385  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:55:13.302405  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:55:13.302422  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:55:13.302453  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:55:13.302480  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:55:13.302505  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:55:13.302522  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:55:13.302591  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:55:13.302636  525511 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:55:13.302649  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:55:13.302680  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:55:13.302709  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:55:13.302740  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:55:13.302792  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:55:13.302833  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:55:13.302856  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:55:13.302874  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:55:13.303493  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:55:13.328830  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:55:13.351713  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:55:13.373851  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:55:13.396943  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0730 00:55:13.420988  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 00:55:13.443038  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:55:13.465032  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:55:13.486886  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:55:13.508579  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:55:13.530494  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:55:13.553169  525511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 00:55:13.568444  525511 ssh_runner.go:195] Run: openssl version
	I0730 00:55:13.574001  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:55:13.584034  525511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:55:13.588048  525511 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:55:13.588102  525511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:55:13.593397  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:55:13.602121  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:55:13.612074  525511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:55:13.616277  525511 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:55:13.616319  525511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:55:13.621869  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:55:13.630685  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:55:13.640562  525511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:55:13.644467  525511 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:55:13.644527  525511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:55:13.649871  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 00:55:13.658539  525511 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:55:13.662629  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 00:55:13.667827  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 00:55:13.672950  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 00:55:13.678422  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 00:55:13.683868  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 00:55:13.689116  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0730 00:55:13.694253  525511 kubeadm.go:392] StartCluster: {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:55:13.694374  525511 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 00:55:13.694428  525511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 00:55:13.731019  525511 cri.go:89] found id: "571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb"
	I0730 00:55:13.731043  525511 cri.go:89] found id: "3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7"
	I0730 00:55:13.731047  525511 cri.go:89] found id: "beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be"
	I0730 00:55:13.731050  525511 cri.go:89] found id: "dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d"
	I0730 00:55:13.731053  525511 cri.go:89] found id: "eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94"
	I0730 00:55:13.731056  525511 cri.go:89] found id: "3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f"
	I0730 00:55:13.731059  525511 cri.go:89] found id: "225f65c04aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866"
	I0730 00:55:13.731061  525511 cri.go:89] found id: "a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5"
	I0730 00:55:13.731064  525511 cri.go:89] found id: "e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b"
	I0730 00:55:13.731070  525511 cri.go:89] found id: "3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e"
	I0730 00:55:13.731072  525511 cri.go:89] found id: "090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd"
	I0730 00:55:13.731075  525511 cri.go:89] found id: "e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b"
	I0730 00:55:13.731081  525511 cri.go:89] found id: "3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55"
	I0730 00:55:13.731084  525511 cri.go:89] found id: "2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81"
	I0730 00:55:13.731089  525511 cri.go:89] found id: "f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b"
	I0730 00:55:13.731091  525511 cri.go:89] found id: "625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0"
	I0730 00:55:13.731094  525511 cri.go:89] found id: "1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2"
	I0730 00:55:13.731099  525511 cri.go:89] found id: "a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb"
	I0730 00:55:13.731102  525511 cri.go:89] found id: "16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12"
	I0730 00:55:13.731105  525511 cri.go:89] found id: ""
	I0730 00:55:13.731150  525511 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-161305 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-161305 -n ha-161305
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-161305 logs -n 25: (1.726091162s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m04 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp testdata/cp-test.txt                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305:/home/docker/cp-test_ha-161305-m04_ha-161305.txt                       |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305 sudo cat                                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305.txt                                 |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m02:/home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m02 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m03:/home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n                                                                 | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | ha-161305-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-161305 ssh -n ha-161305-m03 sudo cat                                          | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC | 30 Jul 24 00:41 UTC |
	|         | /home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-161305 node stop m02 -v=7                                                     | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-161305 node start m02 -v=7                                                    | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-161305 -v=7                                                           | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-161305 -v=7                                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-161305 --wait=true -v=7                                                    | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:46 UTC | 30 Jul 24 00:50 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-161305                                                                | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:50 UTC |                     |
	| node    | ha-161305 node delete m03 -v=7                                                   | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:50 UTC | 30 Jul 24 00:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-161305 stop -v=7                                                              | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-161305 --wait=true                                                         | ha-161305 | jenkins | v1.33.1 | 30 Jul 24 00:53 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:53:24
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:53:24.553567  525511 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:53:24.553687  525511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:53:24.553697  525511 out.go:304] Setting ErrFile to fd 2...
	I0730 00:53:24.553701  525511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:53:24.553893  525511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:53:24.554468  525511 out.go:298] Setting JSON to false
	I0730 00:53:24.555481  525511 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9347,"bootTime":1722291458,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:53:24.555540  525511 start.go:139] virtualization: kvm guest
	I0730 00:53:24.557819  525511 out.go:177] * [ha-161305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:53:24.559542  525511 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:53:24.559576  525511 notify.go:220] Checking for updates...
	I0730 00:53:24.561900  525511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:53:24.563216  525511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:53:24.564528  525511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:53:24.565732  525511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:53:24.566889  525511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:53:24.568642  525511 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:53:24.569329  525511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:24.569385  525511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:24.585045  525511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I0730 00:53:24.585526  525511 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:24.586042  525511 main.go:141] libmachine: Using API Version  1
	I0730 00:53:24.586069  525511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:24.586475  525511 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:24.586675  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:53:24.586963  525511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:53:24.587289  525511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:24.587331  525511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:24.603408  525511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I0730 00:53:24.603807  525511 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:24.604245  525511 main.go:141] libmachine: Using API Version  1
	I0730 00:53:24.604275  525511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:24.604591  525511 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:24.604834  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:53:24.645144  525511 out.go:177] * Using the kvm2 driver based on existing profile
	I0730 00:53:24.646544  525511 start.go:297] selected driver: kvm2
	I0730 00:53:24.646559  525511 start.go:901] validating driver "kvm2" against &{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:53:24.646723  525511 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:53:24.647123  525511 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:53:24.647264  525511 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:53:24.664052  525511 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:53:24.665146  525511 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 00:53:24.665207  525511 cni.go:84] Creating CNI manager for ""
	I0730 00:53:24.665216  525511 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 00:53:24.665312  525511 start.go:340] cluster config:
	{Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:53:24.665532  525511 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:53:24.668223  525511 out.go:177] * Starting "ha-161305" primary control-plane node in "ha-161305" cluster
	I0730 00:53:24.669523  525511 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:53:24.669567  525511 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:53:24.669580  525511 cache.go:56] Caching tarball of preloaded images
	I0730 00:53:24.669668  525511 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 00:53:24.669685  525511 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 00:53:24.669824  525511 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/config.json ...
	I0730 00:53:24.670032  525511 start.go:360] acquireMachinesLock for ha-161305: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 00:53:24.670077  525511 start.go:364] duration metric: took 25.78µs to acquireMachinesLock for "ha-161305"
	I0730 00:53:24.670097  525511 start.go:96] Skipping create...Using existing machine configuration
	I0730 00:53:24.670105  525511 fix.go:54] fixHost starting: 
	I0730 00:53:24.670379  525511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:53:24.670414  525511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:53:24.685596  525511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0730 00:53:24.685974  525511 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:53:24.686457  525511 main.go:141] libmachine: Using API Version  1
	I0730 00:53:24.686481  525511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:53:24.686816  525511 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:53:24.687042  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:53:24.687198  525511 main.go:141] libmachine: (ha-161305) Calling .GetState
	I0730 00:53:24.688977  525511 fix.go:112] recreateIfNeeded on ha-161305: state=Running err=<nil>
	W0730 00:53:24.689002  525511 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 00:53:24.690970  525511 out.go:177] * Updating the running kvm2 "ha-161305" VM ...
	I0730 00:53:24.692812  525511 machine.go:94] provisionDockerMachine start ...
	I0730 00:53:24.692923  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:53:24.693376  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:24.696498  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.697131  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:24.697161  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.697436  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:24.697678  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.697848  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.698012  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:24.698217  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:53:24.698464  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:53:24.698479  525511 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 00:53:24.812656  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:53:24.812690  525511 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:53:24.812950  525511 buildroot.go:166] provisioning hostname "ha-161305"
	I0730 00:53:24.812981  525511 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:53:24.813163  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:24.816882  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:24.817911  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.817938  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.817960  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:24.817988  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.818131  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.818437  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:24.818627  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:53:24.818824  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:53:24.818838  525511 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-161305 && echo "ha-161305" | sudo tee /etc/hostname
	I0730 00:53:24.947560  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-161305
	
	I0730 00:53:24.947607  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:24.950312  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.950649  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:24.950673  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:24.950905  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:24.951132  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.951323  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:24.951485  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:24.951635  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:53:24.951812  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:53:24.951833  525511 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-161305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-161305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-161305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 00:53:25.065657  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 00:53:25.065691  525511 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 00:53:25.065713  525511 buildroot.go:174] setting up certificates
	I0730 00:53:25.065722  525511 provision.go:84] configureAuth start
	I0730 00:53:25.065733  525511 main.go:141] libmachine: (ha-161305) Calling .GetMachineName
	I0730 00:53:25.066028  525511 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:53:25.068826  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.069221  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:25.069249  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.069369  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:25.071794  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.072164  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:25.072182  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.072358  525511 provision.go:143] copyHostCerts
	I0730 00:53:25.072393  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:53:25.072449  525511 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 00:53:25.072467  525511 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 00:53:25.072548  525511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 00:53:25.072644  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:53:25.072670  525511 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 00:53:25.072680  525511 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 00:53:25.072741  525511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 00:53:25.072806  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:53:25.072829  525511 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 00:53:25.072837  525511 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 00:53:25.072871  525511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 00:53:25.072935  525511 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.ha-161305 san=[127.0.0.1 192.168.39.80 ha-161305 localhost minikube]
	I0730 00:53:25.177489  525511 provision.go:177] copyRemoteCerts
	I0730 00:53:25.177553  525511 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 00:53:25.177581  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:25.180165  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.180483  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:25.180508  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.180664  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:25.180898  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:25.181098  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:25.181238  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:53:25.269971  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 00:53:25.270052  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 00:53:25.297957  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 00:53:25.298039  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0730 00:53:25.326664  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 00:53:25.326742  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 00:53:25.358995  525511 provision.go:87] duration metric: took 293.253296ms to configureAuth
	I0730 00:53:25.359043  525511 buildroot.go:189] setting minikube options for container-runtime
	I0730 00:53:25.359333  525511 config.go:182] Loaded profile config "ha-161305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:53:25.359428  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:53:25.362624  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.362977  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:53:25.363010  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:53:25.363235  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:53:25.363425  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:25.363611  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:53:25.363758  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:53:25.363957  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:53:25.364173  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:53:25.364196  525511 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 00:55:00.012597  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 00:55:00.012633  525511 machine.go:97] duration metric: took 1m35.319747222s to provisionDockerMachine
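	A minimal shell sketch (assumed commands, not part of the captured run) to confirm that the CRIO_MINIKUBE_OPTIONS drop-in written above landed and that CRI-O is healthy after the long provisioning step; the profile name ha-161305 is taken from the log.
	# Not from the test run: spot-check the drop-in and the CRI-O unit state.
	minikube -p ha-161305 ssh "cat /etc/sysconfig/crio.minikube"
	minikube -p ha-161305 ssh "sudo systemctl is-active crio"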
	I0730 00:55:00.012652  525511 start.go:293] postStartSetup for "ha-161305" (driver="kvm2")
	I0730 00:55:00.012664  525511 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 00:55:00.012682  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.013103  525511 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 00:55:00.013145  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.016233  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.016721  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.016750  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.016885  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.017080  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.017226  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.017342  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:55:00.105272  525511 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 00:55:00.109316  525511 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 00:55:00.109340  525511 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 00:55:00.109417  525511 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 00:55:00.109505  525511 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 00:55:00.109543  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 00:55:00.109648  525511 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 00:55:00.118780  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:55:00.141382  525511 start.go:296] duration metric: took 128.715235ms for postStartSetup
	I0730 00:55:00.141427  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.141764  525511 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0730 00:55:00.141793  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.144560  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.144920  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.144948  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.145076  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.145348  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.145569  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.145736  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	W0730 00:55:00.230974  525511 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0730 00:55:00.231006  525511 fix.go:56] duration metric: took 1m35.560901937s for fixHost
	I0730 00:55:00.231030  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.233657  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.234048  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.234079  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.234261  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.234493  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.234670  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.234819  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.234984  525511 main.go:141] libmachine: Using SSH client type: native
	I0730 00:55:00.235158  525511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0730 00:55:00.235171  525511 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0730 00:55:00.356275  525511 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722300900.314749906
	
	I0730 00:55:00.356300  525511 fix.go:216] guest clock: 1722300900.314749906
	I0730 00:55:00.356315  525511 fix.go:229] Guest: 2024-07-30 00:55:00.314749906 +0000 UTC Remote: 2024-07-30 00:55:00.231014598 +0000 UTC m=+95.711873281 (delta=83.735308ms)
	I0730 00:55:00.356335  525511 fix.go:200] guest clock delta is within tolerance: 83.735308ms
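	A minimal shell sketch of the clock-delta check above (assumed commands, run from the host): sample the guest clock over SSH with the same date +%s.%N call and diff it against the host clock; the ~84ms delta logged here is well inside tolerance.
	# Not from the test run: reproduce the guest-vs-host clock delta by hand.
	host_ts=$(date +%s.%N)
	guest_ts=$(minikube -p ha-161305 ssh "date +%s.%N")
	awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { printf "delta=%.3f ms\n", (g - h) * 1000 }'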
	I0730 00:55:00.356344  525511 start.go:83] releasing machines lock for "ha-161305", held for 1m35.686252359s
	I0730 00:55:00.356363  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.356625  525511 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:55:00.359078  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.359473  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.359504  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.359653  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.360161  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.360362  525511 main.go:141] libmachine: (ha-161305) Calling .DriverName
	I0730 00:55:00.360447  525511 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 00:55:00.360494  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.360571  525511 ssh_runner.go:195] Run: cat /version.json
	I0730 00:55:00.360597  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHHostname
	I0730 00:55:00.363066  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.363096  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.363443  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.363470  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.363516  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:00.363548  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:00.363579  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.363761  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.363763  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHPort
	I0730 00:55:00.363942  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHKeyPath
	I0730 00:55:00.363952  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.364111  525511 main.go:141] libmachine: (ha-161305) Calling .GetSSHUsername
	I0730 00:55:00.364137  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:55:00.364223  525511 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/ha-161305/id_rsa Username:docker}
	I0730 00:55:00.474022  525511 ssh_runner.go:195] Run: systemctl --version
	I0730 00:55:00.479834  525511 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 00:55:00.640248  525511 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 00:55:00.645812  525511 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 00:55:00.645890  525511 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 00:55:00.656663  525511 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 00:55:00.656690  525511 start.go:495] detecting cgroup driver to use...
	I0730 00:55:00.656776  525511 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 00:55:00.674989  525511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 00:55:00.689756  525511 docker.go:217] disabling cri-docker service (if available) ...
	I0730 00:55:00.689830  525511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 00:55:00.705809  525511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 00:55:00.718691  525511 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 00:55:00.864576  525511 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 00:55:01.008399  525511 docker.go:233] disabling docker service ...
	I0730 00:55:01.008474  525511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 00:55:01.025983  525511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 00:55:01.039534  525511 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 00:55:01.185964  525511 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 00:55:01.337949  525511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 00:55:01.352151  525511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 00:55:01.369681  525511 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 00:55:01.369798  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.381708  525511 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 00:55:01.381776  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.392651  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.402925  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.413191  525511 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 00:55:01.423307  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.433503  525511 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.443627  525511 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 00:55:01.455179  525511 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 00:55:01.465785  525511 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 00:55:01.475085  525511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:55:01.645738  525511 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 00:55:12.549070  525511 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.903148846s)
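	A minimal shell sketch (assumed command) to inspect the result of the sed edits applied before the 10.9s CRI-O restart above; the expected values mirror the substitutions shown in the log.
	# Not from the test run: show the keys the sed edits above should have set.
	minikube -p ha-161305 ssh "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# Expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",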
	I0730 00:55:12.549114  525511 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 00:55:12.549164  525511 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 00:55:12.557670  525511 start.go:563] Will wait 60s for crictl version
	I0730 00:55:12.557741  525511 ssh_runner.go:195] Run: which crictl
	I0730 00:55:12.563399  525511 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 00:55:12.611536  525511 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 00:55:12.611636  525511 ssh_runner.go:195] Run: crio --version
	I0730 00:55:12.641392  525511 ssh_runner.go:195] Run: crio --version
	I0730 00:55:12.670857  525511 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 00:55:12.672180  525511 main.go:141] libmachine: (ha-161305) Calling .GetIP
	I0730 00:55:12.675012  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:12.675537  525511 main.go:141] libmachine: (ha-161305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:58:6f", ip: ""} in network mk-ha-161305: {Iface:virbr1 ExpiryTime:2024-07-30 01:36:42 +0000 UTC Type:0 Mac:52:54:00:11:58:6f Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-161305 Clientid:01:52:54:00:11:58:6f}
	I0730 00:55:12.675568  525511 main.go:141] libmachine: (ha-161305) DBG | domain ha-161305 has defined IP address 192.168.39.80 and MAC address 52:54:00:11:58:6f in network mk-ha-161305
	I0730 00:55:12.675804  525511 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 00:55:12.680572  525511 kubeadm.go:883] updating cluster {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 00:55:12.680750  525511 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:55:12.680809  525511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:55:12.715050  525511 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:55:12.715075  525511 crio.go:433] Images already preloaded, skipping extraction
	I0730 00:55:12.715126  525511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 00:55:12.748383  525511 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 00:55:12.748404  525511 cache_images.go:84] Images are preloaded, skipping loading
	I0730 00:55:12.748413  525511 kubeadm.go:934] updating node { 192.168.39.80 8443 v1.30.3 crio true true} ...
	I0730 00:55:12.748532  525511 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-161305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 00:55:12.748615  525511 ssh_runner.go:195] Run: crio config
	I0730 00:55:12.798070  525511 cni.go:84] Creating CNI manager for ""
	I0730 00:55:12.798092  525511 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 00:55:12.798105  525511 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 00:55:12.798138  525511 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-161305 NodeName:ha-161305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 00:55:12.798297  525511 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-161305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
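	A minimal shell sketch (assumed commands and paths) for comparing the kubeadm config rendered above with what the cluster actually runs: the on-node path is taken from the scp target a few lines below, the kubectl context is assumed to match the profile name, and kubeadm keeps its ClusterConfiguration in the kubeadm-config ConfigMap.
	# Not from the test run: diff the rendered config against the live cluster view.
	minikube -p ha-161305 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	kubectl --context ha-161305 -n kube-system get configmap kubeadm-config -o yaml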
	
	I0730 00:55:12.798318  525511 kube-vip.go:115] generating kube-vip config ...
	I0730 00:55:12.798371  525511 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0730 00:55:12.809596  525511 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 00:55:12.809705  525511 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
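	A minimal shell sketch (assumed commands) to verify the kube-vip manifest above is doing its job: the VIP 192.168.39.254 from the config should be bound on the leader's eth0 interface, and the API should answer through it (the unauthenticated /healthz endpoint is readable by default).
	# Not from the test run: check that the control-plane VIP is live.
	minikube -p ha-161305 ssh "ip -4 addr show eth0 | grep 192.168.39.254"
	curl -sk https://192.168.39.254:8443/healthz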
	I0730 00:55:12.809765  525511 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 00:55:12.818946  525511 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 00:55:12.819023  525511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0730 00:55:12.828397  525511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0730 00:55:12.846279  525511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 00:55:12.862615  525511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0730 00:55:12.880863  525511 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0730 00:55:12.900902  525511 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0730 00:55:12.905187  525511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 00:55:13.076422  525511 ssh_runner.go:195] Run: sudo systemctl start kubelet
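	A minimal shell sketch (assumed command) to confirm that the kubelet drop-in copied above is what systemd actually loads after the daemon-reload and start.
	# Not from the test run: show the effective kubelet ExecStart from the drop-in.
	minikube -p ha-161305 ssh "systemctl cat kubelet | grep -A2 ExecStart="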
	I0730 00:55:13.092120  525511 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305 for IP: 192.168.39.80
	I0730 00:55:13.092150  525511 certs.go:194] generating shared ca certs ...
	I0730 00:55:13.092167  525511 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:55:13.092368  525511 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 00:55:13.092426  525511 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 00:55:13.092441  525511 certs.go:256] generating profile certs ...
	I0730 00:55:13.092530  525511 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/client.key
	I0730 00:55:13.092570  525511 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.beb08999
	I0730 00:55:13.092592  525511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.beb08999 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80 192.168.39.126 192.168.39.254]
	I0730 00:55:13.301529  525511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.beb08999 ...
	I0730 00:55:13.301566  525511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.beb08999: {Name:mkc5ee9b03045e35981247f1df2f286af7fa675d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:55:13.301783  525511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.beb08999 ...
	I0730 00:55:13.301804  525511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.beb08999: {Name:mk809d5c6053b0224c61787ff8e636f5cf9f72dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:55:13.301918  525511 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt.beb08999 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt
	I0730 00:55:13.302135  525511 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key.beb08999 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key
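	A minimal shell sketch (assumed command, run on the host) to confirm the regenerated apiserver certificate carries the SANs listed above, including the HA VIP 192.168.39.254.
	# Not from the test run: print the SAN extension of the freshly written cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt \
	  | grep -A1 "Subject Alternative Name"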
	I0730 00:55:13.302340  525511 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key
	I0730 00:55:13.302363  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 00:55:13.302385  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 00:55:13.302405  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 00:55:13.302422  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 00:55:13.302453  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 00:55:13.302480  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 00:55:13.302505  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 00:55:13.302522  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 00:55:13.302591  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 00:55:13.302636  525511 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 00:55:13.302649  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 00:55:13.302680  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 00:55:13.302709  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 00:55:13.302740  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 00:55:13.302792  525511 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 00:55:13.302833  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 00:55:13.302856  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:55:13.302874  525511 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 00:55:13.303493  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 00:55:13.328830  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 00:55:13.351713  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 00:55:13.373851  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 00:55:13.396943  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0730 00:55:13.420988  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 00:55:13.443038  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 00:55:13.465032  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/ha-161305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 00:55:13.486886  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 00:55:13.508579  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 00:55:13.530494  525511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 00:55:13.553169  525511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 00:55:13.568444  525511 ssh_runner.go:195] Run: openssl version
	I0730 00:55:13.574001  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 00:55:13.584034  525511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:55:13.588048  525511 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:55:13.588102  525511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 00:55:13.593397  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 00:55:13.602121  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 00:55:13.612074  525511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 00:55:13.616277  525511 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 00:55:13.616319  525511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 00:55:13.621869  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 00:55:13.630685  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 00:55:13.640562  525511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 00:55:13.644467  525511 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 00:55:13.644527  525511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 00:55:13.649871  525511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
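	The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for trust stores; a minimal shell sketch (assumed commands, run on the node) shows where the hash comes from.
	# Not from the test run: the link name is the certificate's subject hash.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0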
	I0730 00:55:13.658539  525511 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 00:55:13.662629  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 00:55:13.667827  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 00:55:13.672950  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 00:55:13.678422  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 00:55:13.683868  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 00:55:13.689116  525511 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
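	The -checkend runs above exit 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how the expiry pre-check decides whether certs need regeneration; a minimal shell sketch (assumed command, run on the node) of the same check.
	# Not from the test run: exit status tells whether the cert survives another day.
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	  echo "cert valid for at least another 24h"
	else
	  echo "cert expires within 24h and would be regenerated"
	fi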
	I0730 00:55:13.694253  525511 kubeadm.go:392] StartCluster: {Name:ha-161305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-161305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.126 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:55:13.694374  525511 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 00:55:13.694428  525511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 00:55:13.731019  525511 cri.go:89] found id: "571f739c3ec7aed9fec7669919c5c5363b02d94d86b661561b74e7c197b8d9cb"
	I0730 00:55:13.731043  525511 cri.go:89] found id: "3034a674ef2bd59ba46dae2122e4b5868166e8cdae4b6515904f3c9d1950efd7"
	I0730 00:55:13.731047  525511 cri.go:89] found id: "beb8a63139cdb51537bae82b35e83166548dd1dcd7e9b7a273752f084b07c6be"
	I0730 00:55:13.731050  525511 cri.go:89] found id: "dbeddb236c6c540068985404a523e51a93465516f8f64705638bf85d891d327d"
	I0730 00:55:13.731053  525511 cri.go:89] found id: "eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94"
	I0730 00:55:13.731056  525511 cri.go:89] found id: "3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f"
	I0730 00:55:13.731059  525511 cri.go:89] found id: "225f65c04aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866"
	I0730 00:55:13.731061  525511 cri.go:89] found id: "a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5"
	I0730 00:55:13.731064  525511 cri.go:89] found id: "e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b"
	I0730 00:55:13.731070  525511 cri.go:89] found id: "3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e"
	I0730 00:55:13.731072  525511 cri.go:89] found id: "090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd"
	I0730 00:55:13.731075  525511 cri.go:89] found id: "e11b91a20a338c609c9f570bffe0fa7bc3d6e1177326362263f0c5b6c0916e8b"
	I0730 00:55:13.731081  525511 cri.go:89] found id: "3b13100aa8cf34a6b7fbc2b9f918e394b83d5ae29946844d6e828698be974a55"
	I0730 00:55:13.731084  525511 cri.go:89] found id: "2b2f636edadaa437a64e08b7d84679c68e85c0ee923df11ce1e6c38f0061af81"
	I0730 00:55:13.731089  525511 cri.go:89] found id: "f6480acdda7d51a0798a4f5fcf49f59d138a6bf26a3f14389f8af4d5005fc34b"
	I0730 00:55:13.731091  525511 cri.go:89] found id: "625a67c138c38cb88970b5fade0900c46c35d090ab77f5ba20d9886076f35cc0"
	I0730 00:55:13.731094  525511 cri.go:89] found id: "1805553d07226f5b62f51eb524fd47ba91183380561c046cdc743997a44edec2"
	I0730 00:55:13.731099  525511 cri.go:89] found id: "a2084c91812922f1e7b32d0c4c7b59021ceff0f9824b9c7ca98dbf1cf98db1cb"
	I0730 00:55:13.731102  525511 cri.go:89] found id: "16a5f7eb1118e73068798d5f7504a2f0fcadae5156dbc22a9bb584a1ae42ba12"
	I0730 00:55:13.731105  525511 cri.go:89] found id: ""
	I0730 00:55:13.731150  525511 ssh_runner.go:195] Run: sudo runc list -f json
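	A minimal shell sketch (assumed command) for resolving the kube-system container IDs enumerated above back to pod and container names during post-mortem.
	# Not from the test run: list the same containers with their names and states.
	minikube -p ha-161305 ssh "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o table"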
	
	
	==> CRI-O <==
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.642460046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722301588642437127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ded073d-1778-4383-baae-02b1a5217c6a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.643239976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05bbac29-6729-45b5-b5e0-d9b255367a3e name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.643295639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05bbac29-6729-45b5-b5e0-d9b255367a3e name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.644414920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14d65be0e64407c439529243ec5f9c6ca7a32b75d5894af18e6a6819b77a345d,PodSandboxId:c1fd3ab52899bc629bb3fd7ee7957b9c6ee7b1d57e118642d9f7dd692e6072d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722301384365266882,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb7cbfbcdf21197cd412ccdfdfa61563a708989bcb0c5cb5a4aaa2069c2f041,PodSandboxId:1e93ecf25ea120a106bd1d696c57afc03adf3cf355849983a03d9817aebdb555,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722301096372461906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 5,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79feee35b5d27b3f038bed0602fb04956d627055287b8a01c0a5d5c83ee67ce7,PodSandboxId:904f92fcdb16b536c22321581a25680f1f54570ae011063db666af4357e65d80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722301086371015960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6f293d623755763934ee2832fc59acd26c230d0966ef668ca5713a09e87d1c,PodSandboxId:c1fd3ab52899bc629bb3fd7ee7957b9c6ee7b1d57e118642d9f7dd692e6072d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301083366537652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d936e5ce1b0fe00843c89425e15c6948c485267a3e227326e712a02d879064,PodSandboxId:1e93ecf25ea120a106bd1d696c57afc03adf3cf355849983a03d9817aebdb555,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300985365482338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db133561952508e903f498ac93deedc933d61c6f113f5d0e246ff051dea1320,PodSandboxId:904f92fcdb16b536c22321581a25680f1f54570ae011063db666af4357e65d80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300984367595695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778cc39e675bd32c5b1b23cd322cf3aa7850d63268a66a326aba571fd21bf2aa,PodSandboxId:aac3f7e2d270b4df1805f1207b948b8581996ae1b5acab6d41391de9d1e31a26,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300949650017766,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d017f6e935e3668f55e7cc5831041ed6c5d1ee1fcbdf8114b39626fb64a735b,PodSandboxId:20459c01724d4979b12b823e0013e3b45127db07e6cfe5d5b6ce6ecc8c945ec7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300916737683150,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},&Container{Id:8f58ee5417f0f7c5e891fdb31ef8252e34171e866e561cc2e26be3e7d87510c4,PodSandboxId:fdad66576b061fb683d82b57ddebeab6d4850af008273251a82938e42ec3414f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300916503240260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:f3c56d8ba180012e211971649c57ec997e79a7e48d32f1b76c0fcb1a89f96a35,PodSandboxId:d3ecc3cd9efbbc1a3b9d7b0967ba72257bb4631aaf9bfeafb2d5db9cc47edfe0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300916645885742,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51bb11fb995e3309da082654f
3ac0db689b2e431e5d151595490fd73b7618512,PodSandboxId:72f159ee30fc724cd074dbd5bad1086f912686fffa8ad212bc7980147bb9d27e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300916421295372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7838f0d734184c8def1260d2daf1e30f37e11d3985be8ed2eb962f97f0c6a683,PodSandboxId:f56a01ef92ba517c1065a70f3ff5f3f61781d3bde8e475e84a0660c09f654f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300916497792280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d496d1fd2d4cd95da89fab8e7b3bbf4bcef78aa0a5ee8ab4aeca419c6eec71,PodSandboxId:2087e193a668ba58ee42d1fed83e4255927fdbdb8b142e3e624d343c805f2d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300916395084215,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707e73406ffad3fcd2b18c53714531516a2fd37c1fffde83e70824c6c425b072,PodSandboxId:aae676e87b0d2b9033dda960e5eeb265cb46d891f43a8e81420b95a2bc88deca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300916126170523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300539617158324,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722300520079128815,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722300506677832236,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722300506475549836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c04aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722300506533899557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722300506420876258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14
aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722300506356525524,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722300506248456670,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05bbac29-6729-45b5-b5e0-d9b255367a3e name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.686835117Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5fe6c0e-ae51-40c3-b4eb-349f70c467ea name=/runtime.v1.RuntimeService/Version
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.686910301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5fe6c0e-ae51-40c3-b4eb-349f70c467ea name=/runtime.v1.RuntimeService/Version
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.687892301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c40f890a-9d9f-48bd-aed9-63cf2db93f14 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.688411783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722301588688385761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c40f890a-9d9f-48bd-aed9-63cf2db93f14 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.688831103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2b5c8f4-5361-40c9-8b80-1de6aacdcfe6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.688916592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2b5c8f4-5361-40c9-8b80-1de6aacdcfe6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.689391618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14d65be0e64407c439529243ec5f9c6ca7a32b75d5894af18e6a6819b77a345d,PodSandboxId:c1fd3ab52899bc629bb3fd7ee7957b9c6ee7b1d57e118642d9f7dd692e6072d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722301384365266882,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb7cbfbcdf21197cd412ccdfdfa61563a708989bcb0c5cb5a4aaa2069c2f041,PodSandboxId:1e93ecf25ea120a106bd1d696c57afc03adf3cf355849983a03d9817aebdb555,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722301096372461906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 5,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79feee35b5d27b3f038bed0602fb04956d627055287b8a01c0a5d5c83ee67ce7,PodSandboxId:904f92fcdb16b536c22321581a25680f1f54570ae011063db666af4357e65d80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722301086371015960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6f293d623755763934ee2832fc59acd26c230d0966ef668ca5713a09e87d1c,PodSandboxId:c1fd3ab52899bc629bb3fd7ee7957b9c6ee7b1d57e118642d9f7dd692e6072d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301083366537652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d936e5ce1b0fe00843c89425e15c6948c485267a3e227326e712a02d879064,PodSandboxId:1e93ecf25ea120a106bd1d696c57afc03adf3cf355849983a03d9817aebdb555,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300985365482338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db133561952508e903f498ac93deedc933d61c6f113f5d0e246ff051dea1320,PodSandboxId:904f92fcdb16b536c22321581a25680f1f54570ae011063db666af4357e65d80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300984367595695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778cc39e675bd32c5b1b23cd322cf3aa7850d63268a66a326aba571fd21bf2aa,PodSandboxId:aac3f7e2d270b4df1805f1207b948b8581996ae1b5acab6d41391de9d1e31a26,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300949650017766,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d017f6e935e3668f55e7cc5831041ed6c5d1ee1fcbdf8114b39626fb64a735b,PodSandboxId:20459c01724d4979b12b823e0013e3b45127db07e6cfe5d5b6ce6ecc8c945ec7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300916737683150,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},&Container{Id:8f58ee5417f0f7c5e891fdb31ef8252e34171e866e561cc2e26be3e7d87510c4,PodSandboxId:fdad66576b061fb683d82b57ddebeab6d4850af008273251a82938e42ec3414f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300916503240260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:f3c56d8ba180012e211971649c57ec997e79a7e48d32f1b76c0fcb1a89f96a35,PodSandboxId:d3ecc3cd9efbbc1a3b9d7b0967ba72257bb4631aaf9bfeafb2d5db9cc47edfe0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300916645885742,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51bb11fb995e3309da082654f
3ac0db689b2e431e5d151595490fd73b7618512,PodSandboxId:72f159ee30fc724cd074dbd5bad1086f912686fffa8ad212bc7980147bb9d27e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300916421295372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7838f0d734184c8def1260d2daf1e30f37e11d3985be8ed2eb962f97f0c6a683,PodSandboxId:f56a01ef92ba517c1065a70f3ff5f3f61781d3bde8e475e84a0660c09f654f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300916497792280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d496d1fd2d4cd95da89fab8e7b3bbf4bcef78aa0a5ee8ab4aeca419c6eec71,PodSandboxId:2087e193a668ba58ee42d1fed83e4255927fdbdb8b142e3e624d343c805f2d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300916395084215,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707e73406ffad3fcd2b18c53714531516a2fd37c1fffde83e70824c6c425b072,PodSandboxId:aae676e87b0d2b9033dda960e5eeb265cb46d891f43a8e81420b95a2bc88deca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300916126170523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300539617158324,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722300520079128815,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722300506677832236,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722300506475549836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c04aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722300506533899557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722300506420876258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14
aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722300506356525524,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722300506248456670,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2b5c8f4-5361-40c9-8b80-1de6aacdcfe6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.729415274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68faa00b-852f-401e-9c71-f3c839e5dff3 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.729499725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68faa00b-852f-401e-9c71-f3c839e5dff3 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.730430813Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19a8b7a8-ab1c-474d-9b7b-82b819e84116 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.730856422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722301588730832686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19a8b7a8-ab1c-474d-9b7b-82b819e84116 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.731565616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a00e03d4-09fd-4e8f-9ef8-06df62a2a73c name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.731653257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a00e03d4-09fd-4e8f-9ef8-06df62a2a73c name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.732247577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14d65be0e64407c439529243ec5f9c6ca7a32b75d5894af18e6a6819b77a345d,PodSandboxId:c1fd3ab52899bc629bb3fd7ee7957b9c6ee7b1d57e118642d9f7dd692e6072d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722301384365266882,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb7cbfbcdf21197cd412ccdfdfa61563a708989bcb0c5cb5a4aaa2069c2f041,PodSandboxId:1e93ecf25ea120a106bd1d696c57afc03adf3cf355849983a03d9817aebdb555,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722301096372461906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 5,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79feee35b5d27b3f038bed0602fb04956d627055287b8a01c0a5d5c83ee67ce7,PodSandboxId:904f92fcdb16b536c22321581a25680f1f54570ae011063db666af4357e65d80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722301086371015960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6f293d623755763934ee2832fc59acd26c230d0966ef668ca5713a09e87d1c,PodSandboxId:c1fd3ab52899bc629bb3fd7ee7957b9c6ee7b1d57e118642d9f7dd692e6072d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301083366537652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d936e5ce1b0fe00843c89425e15c6948c485267a3e227326e712a02d879064,PodSandboxId:1e93ecf25ea120a106bd1d696c57afc03adf3cf355849983a03d9817aebdb555,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300985365482338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db133561952508e903f498ac93deedc933d61c6f113f5d0e246ff051dea1320,PodSandboxId:904f92fcdb16b536c22321581a25680f1f54570ae011063db666af4357e65d80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300984367595695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778cc39e675bd32c5b1b23cd322cf3aa7850d63268a66a326aba571fd21bf2aa,PodSandboxId:aac3f7e2d270b4df1805f1207b948b8581996ae1b5acab6d41391de9d1e31a26,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300949650017766,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d017f6e935e3668f55e7cc5831041ed6c5d1ee1fcbdf8114b39626fb64a735b,PodSandboxId:20459c01724d4979b12b823e0013e3b45127db07e6cfe5d5b6ce6ecc8c945ec7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300916737683150,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},&Container{Id:8f58ee5417f0f7c5e891fdb31ef8252e34171e866e561cc2e26be3e7d87510c4,PodSandboxId:fdad66576b061fb683d82b57ddebeab6d4850af008273251a82938e42ec3414f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300916503240260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:f3c56d8ba180012e211971649c57ec997e79a7e48d32f1b76c0fcb1a89f96a35,PodSandboxId:d3ecc3cd9efbbc1a3b9d7b0967ba72257bb4631aaf9bfeafb2d5db9cc47edfe0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300916645885742,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51bb11fb995e3309da082654f
3ac0db689b2e431e5d151595490fd73b7618512,PodSandboxId:72f159ee30fc724cd074dbd5bad1086f912686fffa8ad212bc7980147bb9d27e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300916421295372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7838f0d734184c8def1260d2daf1e30f37e11d3985be8ed2eb962f97f0c6a683,PodSandboxId:f56a01ef92ba517c1065a70f3ff5f3f61781d3bde8e475e84a0660c09f654f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300916497792280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d496d1fd2d4cd95da89fab8e7b3bbf4bcef78aa0a5ee8ab4aeca419c6eec71,PodSandboxId:2087e193a668ba58ee42d1fed83e4255927fdbdb8b142e3e624d343c805f2d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300916395084215,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707e73406ffad3fcd2b18c53714531516a2fd37c1fffde83e70824c6c425b072,PodSandboxId:aae676e87b0d2b9033dda960e5eeb265cb46d891f43a8e81420b95a2bc88deca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300916126170523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300539617158324,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722300520079128815,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722300506677832236,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722300506475549836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c04aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722300506533899557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722300506420876258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14
aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722300506356525524,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722300506248456670,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a00e03d4-09fd-4e8f-9ef8-06df62a2a73c name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.775207680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e454e091-3c66-459c-b9b0-ac8946f69c83 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.775329222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e454e091-3c66-459c-b9b0-ac8946f69c83 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.777204326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b21c2333-9f5c-4206-8f6c-8ef39585bb9b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.777920370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722301588777894358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b21c2333-9f5c-4206-8f6c-8ef39585bb9b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.778422003Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19bc5478-c827-40c6-a6b2-1916c0a3dbc2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.778480248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19bc5478-c827-40c6-a6b2-1916c0a3dbc2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:06:28 ha-161305 crio[6378]: time="2024-07-30 01:06:28.779003674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14d65be0e64407c439529243ec5f9c6ca7a32b75d5894af18e6a6819b77a345d,PodSandboxId:c1fd3ab52899bc629bb3fd7ee7957b9c6ee7b1d57e118642d9f7dd692e6072d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:7,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722301384365266882,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 7,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb7cbfbcdf21197cd412ccdfdfa61563a708989bcb0c5cb5a4aaa2069c2f041,PodSandboxId:1e93ecf25ea120a106bd1d696c57afc03adf3cf355849983a03d9817aebdb555,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722301096372461906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 5,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79feee35b5d27b3f038bed0602fb04956d627055287b8a01c0a5d5c83ee67ce7,PodSandboxId:904f92fcdb16b536c22321581a25680f1f54570ae011063db666af4357e65d80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722301086371015960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6f293d623755763934ee2832fc59acd26c230d0966ef668ca5713a09e87d1c,PodSandboxId:c1fd3ab52899bc629bb3fd7ee7957b9c6ee7b1d57e118642d9f7dd692e6072d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301083366537652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75260b22-5ffc-4848-8c70-5b9cb3f010bf,},Annotations:map[string]string{io.kubernetes.container.hash: 27a85968,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d936e5ce1b0fe00843c89425e15c6948c485267a3e227326e712a02d879064,PodSandboxId:1e93ecf25ea120a106bd1d696c57afc03adf3cf355849983a03d9817aebdb555,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722300985365482338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 139678a0c09914387156e9653bed8a57,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db133561952508e903f498ac93deedc933d61c6f113f5d0e246ff051dea1320,PodSandboxId:904f92fcdb16b536c22321581a25680f1f54570ae011063db666af4357e65d80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722300984367595695,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e78fc87ed9d024ac0fe2effd95cda2d8,},Annotations:map[string]string{io.kubernetes.container.hash: ae21d80f,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778cc39e675bd32c5b1b23cd322cf3aa7850d63268a66a326aba571fd21bf2aa,PodSandboxId:aac3f7e2d270b4df1805f1207b948b8581996ae1b5acab6d41391de9d1e31a26,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722300949650017766,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kubernetes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d017f6e935e3668f55e7cc5831041ed6c5d1ee1fcbdf8114b39626fb64a735b,PodSandboxId:20459c01724d4979b12b823e0013e3b45127db07e6cfe5d5b6ce6ecc8c945ec7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722300916737683150,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},&Container{Id:8f58ee5417f0f7c5e891fdb31ef8252e34171e866e561cc2e26be3e7d87510c4,PodSandboxId:fdad66576b061fb683d82b57ddebeab6d4850af008273251a82938e42ec3414f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722300916503240260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:f3c56d8ba180012e211971649c57ec997e79a7e48d32f1b76c0fcb1a89f96a35,PodSandboxId:d3ecc3cd9efbbc1a3b9d7b0967ba72257bb4631aaf9bfeafb2d5db9cc47edfe0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722300916645885742,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51bb11fb995e3309da082654f
3ac0db689b2e431e5d151595490fd73b7618512,PodSandboxId:72f159ee30fc724cd074dbd5bad1086f912686fffa8ad212bc7980147bb9d27e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300916421295372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7838f0d734184c8def1260d2daf1e30f37e11d3985be8ed2eb962f97f0c6a683,PodSandboxId:f56a01ef92ba517c1065a70f3ff5f3f61781d3bde8e475e84a0660c09f654f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722300916497792280,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d496d1fd2d4cd95da89fab8e7b3bbf4bcef78aa0a5ee8ab4aeca419c6eec71,PodSandboxId:2087e193a668ba58ee42d1fed83e4255927fdbdb8b142e3e624d343c805f2d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722300916395084215,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707e73406ffad3fcd2b18c53714531516a2fd37c1fffde83e70824c6c425b072,PodSandboxId:aae676e87b0d2b9033dda960e5eeb265cb46d891f43a8e81420b95a2bc88deca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722300916126170523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37637e74a1f33f7e17d19f7c696c67bf339845d5f7c3e6d6f106697b82d943e0,PodSandboxId:45a56eb6f8ca1ff33c8267a16ce3f94299b3beee0f70961d937bc58f41988a3e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722300539617158324,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ttjx8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 93297df5-25c9-4722-8f86-668316a3d005,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4e1f3459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eca65a5f97abc9f75e48031d3900fc9ef26a6f352fbb867dcfb1a4faf8bede94,PodSandboxId:f2cde2eb18016084a2908910b7a988e12a7d93b79ca396bcccb4b2bfea0ab446,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1722300520079128815,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98cc2f4e3fa5d2b9b450a9e8e1bc531,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f,PodSandboxId:62603cd489d837ad252d1307911c7a999cd1f9731a0a296f57e7b7319b52d936,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722300506677832236,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wptvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1733d06b-6eb7-4dd5-9349-b727cc05e797,},Annotations:map[string]string{io.kubernetes.container.hash: ad907a0f,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5,PodSandboxId:3452972572a3bcc9dd6fdfa7f3e543266947fb3f91db011621d927189ca34671,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722300506475549836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zrzxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3745faa8-044d-4923-8a49-c21a0332e208,},Annotations:map[string]string{io.kubernetes.container.hash: 3000e9fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f65c04aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866,PodSandboxId:a7a7848979d5daf641e2f99e4a4f6b61eded02b1752418c44fdf3c58eee40b75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722300506533899557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mzcln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cab12f67-38e0-41f7-8414-120064dca1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2229d6c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b,PodSandboxId:14b01800078de5dcbab617e5dc7a8b3910ff32377a5ae929ffb5da99830efac4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722300506420876258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bdpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1470c5-85f4-4dfa-84c0-14
aa6c713e73,},Annotations:map[string]string{io.kubernetes.container.hash: 76b432cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e,PodSandboxId:5937bdc3a20dceff23019204d7b968848eabbb858213a8eb6525255103f90bb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722300506356525524,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d18c18869abbb97793407467ebdef17,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd,PodSandboxId:9818b8693e1bc7d27df78383bbb70e56a425cc3636a812d8a0a9449024c67390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722300506248456670,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-161305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd41dd340ce6d6e863fbe359a241ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 97bba51c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19bc5478-c827-40c6-a6b2-1916c0a3dbc2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	14d65be0e6440       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 minutes ago       Running             storage-provisioner       7                   c1fd3ab52899b       storage-provisioner
	7fb7cbfbcdf21       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   8 minutes ago       Running             kube-controller-manager   5                   1e93ecf25ea12       kube-controller-manager-ha-161305
	79feee35b5d27       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   8 minutes ago       Running             kube-apiserver            6                   904f92fcdb16b       kube-apiserver-ha-161305
	4b6f293d62375       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   8 minutes ago       Exited              storage-provisioner       6                   c1fd3ab52899b       storage-provisioner
	c6d936e5ce1b0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   10 minutes ago      Exited              kube-controller-manager   4                   1e93ecf25ea12       kube-controller-manager-ha-161305
	4db1335619525       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   10 minutes ago      Exited              kube-apiserver            5                   904f92fcdb16b       kube-apiserver-ha-161305
	778cc39e675bd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   10 minutes ago      Running             busybox                   2                   aac3f7e2d270b       busybox-fc5497c4f-ttjx8
	6d017f6e935e3       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   11 minutes ago      Running             kube-vip                  1                   20459c01724d4       kube-vip-ha-161305
	f3c56d8ba1800       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   11 minutes ago      Running             kindnet-cni               2                   d3ecc3cd9efbb       kindnet-zrzxf
	8f58ee5417f0f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   11 minutes ago      Running             kube-proxy                2                   fdad66576b061       kube-proxy-wptvn
	7838f0d734184       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   11 minutes ago      Running             etcd                      2                   f56a01ef92ba5       etcd-ha-161305
	51bb11fb995e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   11 minutes ago      Running             coredns                   2                   72f159ee30fc7       coredns-7db6d8ff4d-bdpds
	80d496d1fd2d4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   11 minutes ago      Running             coredns                   2                   2087e193a668b       coredns-7db6d8ff4d-mzcln
	707e73406ffad       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   11 minutes ago      Running             kube-scheduler            2                   aae676e87b0d2       kube-scheduler-ha-161305
	37637e74a1f33       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   17 minutes ago      Exited              busybox                   1                   45a56eb6f8ca1       busybox-fc5497c4f-ttjx8
	eca65a5f97abc       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   17 minutes ago      Exited              kube-vip                  0                   f2cde2eb18016       kube-vip-ha-161305
	3794d8da6d031       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   18 minutes ago      Exited              kube-proxy                1                   62603cd489d83       kube-proxy-wptvn
	225f65c04aecc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Exited              coredns                   1                   a7a7848979d5d       coredns-7db6d8ff4d-mzcln
	a4940cda3f54a       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46   18 minutes ago      Exited              kindnet-cni               1                   3452972572a3b       kindnet-zrzxf
	e7edc1afdc01a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Exited              coredns                   1                   14b01800078de       coredns-7db6d8ff4d-bdpds
	3ab677666e42b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   18 minutes ago      Exited              kube-scheduler            1                   5937bdc3a20dc       kube-scheduler-ha-161305
	090db2af84793       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago      Exited              etcd                      1                   9818b8693e1bc       etcd-ha-161305
	
	
	==> coredns [225f65c04aecc730ddebca4bc948379c579f2414dba20db6c73b9f7dc5e82866] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57474->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2075708608]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 00:48:38.336) (total time: 10441ms):
	Trace[2075708608]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57474->10.96.0.1:443: read: connection reset by peer 10441ms (00:48:48.777)
	Trace[2075708608]: [10.4414144s] [10.4414144s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:57474->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [51bb11fb995e3309da082654f3ac0db689b2e431e5d151595490fd73b7618512] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [80d496d1fd2d4cd95da89fab8e7b3bbf4bcef78aa0a5ee8ab4aeca419c6eec71] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1699383355]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 00:58:17.619) (total time: 10001ms):
	Trace[1699383355]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:58:27.621)
	Trace[1699383355]: [10.001526927s] [10.001526927s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e7edc1afdc01a6082e9f8077381b2a2d79679f920af3891ca4530dc5308d0b3b] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[809198444]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 00:48:38.223) (total time: 13405ms):
	Trace[809198444]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47084->10.96.0.1:443: read: connection reset by peer 13405ms (00:48:51.628)
	Trace[809198444]: [13.405604121s] [13.405604121s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47084->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47108->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47108->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-161305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T00_37_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:37:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 01:06:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 01:03:35 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 01:03:35 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 01:03:35 +0000   Tue, 30 Jul 2024 00:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 01:03:35 +0000   Tue, 30 Jul 2024 00:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-161305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee5b503318a04d5fa9f6151b095f43f6
	  System UUID:                ee5b5033-18a0-4d5f-a9f6-151b095f43f6
	  Boot ID:                    c41944eb-218c-41cb-bf89-ac90ba0a8709
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ttjx8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-7db6d8ff4d-bdpds             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-7db6d8ff4d-mzcln             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-ha-161305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-zrzxf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29m
	  kube-system                 kube-apiserver-ha-161305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-161305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-wptvn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-161305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-161305                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 29m                kube-proxy       
	  Normal   NodeHasSufficientMemory  29m                kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     29m                kubelet          Node ha-161305 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    29m                kubelet          Node ha-161305 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 29m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  29m                kubelet          Node ha-161305 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           29m                node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   NodeReady                28m                kubelet          Node ha-161305 status is now: NodeReady
	  Normal   RegisteredNode           27m                node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           26m                node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	  Warning  ContainerGCFailed        11m (x4 over 19m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           7m49s              node-controller  Node ha-161305 event: Registered Node ha-161305 in Controller
	
	
	Name:               ha-161305-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_38_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:38:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 01:06:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 01:03:38 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 01:03:38 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 01:03:38 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 01:03:38 +0000   Tue, 30 Jul 2024 00:49:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    ha-161305-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a157fd7e5c14479d97024c5548311976
	  System UUID:                a157fd7e-5c14-479d-9702-4c5548311976
	  Boot ID:                    4f645d45-ff44-451d-986c-85a804baaea9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v2pq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-161305-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-dj7v2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-161305-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-161305-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-pqr2f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-161305-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-161305-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28m                kube-proxy       
	  Normal   Starting                 8m1s               kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node ha-161305-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           28m                node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal   RegisteredNode           27m                node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal   RegisteredNode           26m                node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal   NodeNotReady             24m                node-controller  Node ha-161305-m02 status is now: NodeNotReady
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-161305-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node ha-161305-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	  Warning  ContainerGCFailed        10m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           7m49s              node-controller  Node ha-161305-m02 event: Registered Node ha-161305-m02 in Controller
	
	
	Name:               ha-161305-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-161305-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=ha-161305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T00_40_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 00:40:35 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-161305-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 00:50:55 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 30 Jul 2024 00:50:35 +0000   Tue, 30 Jul 2024 00:51:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-161305-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16981c9b42447afa5527547ca393cc7
	  System UUID:                b16981c9-b424-47af-a552-7547ca393cc7
	  Boot ID:                    cd17f5a2-30ac-44ae-8c6d-bf637a282fdf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7sdnf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kindnet-bdl2h              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-proxy-f9bfb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeHasSufficientPID     25m (x2 over 25m)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  25m (x2 over 25m)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet          Node ha-161305-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           25m                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   NodeReady                25m                kubelet          Node ha-161305-m04 status is now: NodeReady
	  Normal   RegisteredNode           17m                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m (x3 over 15m)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x3 over 15m)  kubelet          Node ha-161305-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x3 over 15m)  kubelet          Node ha-161305-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 15m (x2 over 15m)  kubelet          Node ha-161305-m04 has been rebooted, boot id: cd17f5a2-30ac-44ae-8c6d-bf637a282fdf
	  Normal   NodeReady                15m (x2 over 15m)  kubelet          Node ha-161305-m04 status is now: NodeReady
	  Normal   NodeNotReady             14m (x2 over 16m)  node-controller  Node ha-161305-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           7m49s              node-controller  Node ha-161305-m04 event: Registered Node ha-161305-m04 in Controller
	
	
	==> dmesg <==
	[  +6.953682] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.085875] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.685156] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.526010] kauditd_printk_skb: 38 callbacks suppressed
	[Jul30 00:38] kauditd_printk_skb: 26 callbacks suppressed
	[Jul30 00:48] systemd-fstab-generator[3667]: Ignoring "noauto" option for root device
	[  +0.145039] systemd-fstab-generator[3679]: Ignoring "noauto" option for root device
	[  +0.168770] systemd-fstab-generator[3693]: Ignoring "noauto" option for root device
	[  +0.148265] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +0.269338] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[  +9.072086] systemd-fstab-generator[3835]: Ignoring "noauto" option for root device
	[  +0.089847] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.952407] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.176537] kauditd_printk_skb: 97 callbacks suppressed
	[ +28.601116] kauditd_printk_skb: 1 callbacks suppressed
	[Jul30 00:55] systemd-fstab-generator[6287]: Ignoring "noauto" option for root device
	[  +0.144722] systemd-fstab-generator[6299]: Ignoring "noauto" option for root device
	[  +0.170089] systemd-fstab-generator[6313]: Ignoring "noauto" option for root device
	[  +0.153135] systemd-fstab-generator[6325]: Ignoring "noauto" option for root device
	[  +0.301593] systemd-fstab-generator[6353]: Ignoring "noauto" option for root device
	[ +11.425075] systemd-fstab-generator[6474]: Ignoring "noauto" option for root device
	[  +0.097723] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.226751] kauditd_printk_skb: 121 callbacks suppressed
	[Jul30 00:56] kauditd_printk_skb: 1 callbacks suppressed
	[Jul30 00:58] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [090db2af847934ced4239421372ec2339b8a6ea1783591d7de15209408898cfd] <==
	{"level":"info","ts":"2024-07-30T00:53:25.509231Z","caller":"etcdserver/server.go:1431","msg":"leadership transfer starting","local-member-id":"d33e7f1dba1e46ae","current-leader-member-id":"d33e7f1dba1e46ae","transferee-member-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:53:25.509276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae [term 3] starts to transfer leadership to a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:53:25.509362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae sends MsgTimeoutNow to a35d2ed713d63272 immediately as a35d2ed713d63272 already has up-to-date log"}
	{"level":"info","ts":"2024-07-30T00:53:25.513539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae [term: 3] received a MsgVote message with higher term from a35d2ed713d63272 [term: 4]"}
	{"level":"info","ts":"2024-07-30T00:53:25.513594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became follower at term 4"}
	{"level":"info","ts":"2024-07-30T00:53:25.513609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae [logterm: 3, index: 3449, vote: 0] cast MsgVote for a35d2ed713d63272 [logterm: 3, index: 3449] at term 4"}
	{"level":"info","ts":"2024-07-30T00:53:25.513619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d33e7f1dba1e46ae lost leader d33e7f1dba1e46ae at term 4"}
	{"level":"info","ts":"2024-07-30T00:53:25.518763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d33e7f1dba1e46ae elected leader a35d2ed713d63272 at term 4"}
	{"level":"info","ts":"2024-07-30T00:53:25.610332Z","caller":"etcdserver/server.go:1448","msg":"leadership transfer finished","local-member-id":"d33e7f1dba1e46ae","old-leader-member-id":"d33e7f1dba1e46ae","new-leader-member-id":"a35d2ed713d63272","took":"101.09779ms"}
	{"level":"info","ts":"2024-07-30T00:53:25.610686Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a35d2ed713d63272"}
	{"level":"warn","ts":"2024-07-30T00:53:25.612797Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:53:25.612916Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a35d2ed713d63272"}
	{"level":"warn","ts":"2024-07-30T00:53:25.614124Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:53:25.614183Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:53:25.614311Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"warn","ts":"2024-07-30T00:53:25.614496Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","error":"context canceled"}
	{"level":"warn","ts":"2024-07-30T00:53:25.614566Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"a35d2ed713d63272","error":"failed to read a35d2ed713d63272 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-30T00:53:25.614644Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"warn","ts":"2024-07-30T00:53:25.614762Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272","error":"context canceled"}
	{"level":"info","ts":"2024-07-30T00:53:25.614798Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:53:25.614826Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:53:25.619686Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"warn","ts":"2024-07-30T00:53:25.619918Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.126:34284","server-name":"","error":"read tcp 192.168.39.80:2380->192.168.39.126:34284: use of closed network connection"}
	{"level":"info","ts":"2024-07-30T00:53:26.166439Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2024-07-30T00:53:26.166499Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-161305","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"]}
	
	
	==> etcd [7838f0d734184c8def1260d2daf1e30f37e11d3985be8ed2eb962f97f0c6a683] <==
	{"level":"info","ts":"2024-07-30T00:58:26.337371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became candidate at term 5"}
	{"level":"info","ts":"2024-07-30T00:58:26.337397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae received MsgVoteResp from d33e7f1dba1e46ae at term 5"}
	{"level":"info","ts":"2024-07-30T00:58:26.337492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae [logterm: 4, index: 3450] sent MsgVote request to a35d2ed713d63272 at term 5"}
	{"level":"warn","ts":"2024-07-30T00:58:26.358711Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"d33e7f1dba1e46ae","local-member-attributes":"{Name:ha-161305 ClientURLs:[https://192.168.39.80:2379]}","request-path":"/0/members/d33e7f1dba1e46ae/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-07-30T00:58:26.358851Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d33e7f1dba1e46ae","to":"a35d2ed713d63272","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-30T00:58:26.3589Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d33e7f1dba1e46ae","remote-peer-id":"a35d2ed713d63272"}
	{"level":"info","ts":"2024-07-30T00:58:26.359029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae received MsgVoteResp from a35d2ed713d63272 at term 5"}
	{"level":"info","ts":"2024-07-30T00:58:26.359112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-07-30T00:58:26.359197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became leader at term 5"}
	{"level":"info","ts":"2024-07-30T00:58:26.359245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d33e7f1dba1e46ae elected leader d33e7f1dba1e46ae at term 5"}
	{"level":"info","ts":"2024-07-30T00:58:26.371133Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d33e7f1dba1e46ae","local-member-attributes":"{Name:ha-161305 ClientURLs:[https://192.168.39.80:2379]}","request-path":"/0/members/d33e7f1dba1e46ae/attributes","cluster-id":"e6a6fd39da75dc67","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T00:58:26.371189Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T00:58:26.371927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T00:58:26.372351Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T00:58:26.372447Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T00:58:26.37434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.80:2379"}
	{"level":"info","ts":"2024-07-30T00:58:26.377191Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-30T00:58:26.381343Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:57586","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:57586: write: broken pipe"}
	{"level":"warn","ts":"2024-07-30T00:58:26.382317Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39060","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:39060: write: broken pipe"}
	{"level":"warn","ts":"2024-07-30T00:58:26.384117Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39074","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:39074: write: broken pipe"}
	{"level":"warn","ts":"2024-07-30T00:58:26.386699Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:39086","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:39086: write: broken pipe"}
	{"level":"warn","ts":"2024-07-30T00:58:26.38946Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:57570","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:57570: write: broken pipe"}
	{"level":"warn","ts":"2024-07-30T00:58:26.39216Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:57580","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:57580: write: broken pipe"}
	{"level":"warn","ts":"2024-07-30T00:58:27.390704Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a35d2ed713d63272","rtt":"0s","error":"dial tcp 192.168.39.126:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T00:58:27.390611Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a35d2ed713d63272","rtt":"0s","error":"dial tcp 192.168.39.126:2380: connect: connection refused"}
	
	
	==> kernel <==
	 01:06:29 up 29 min,  0 users,  load average: 0.10, 0.11, 0.19
	Linux ha-161305 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a4940cda3f54ac68f1d3abdcfb892a898fd952fbbb0bb5de1e1dd51184c6d1a5] <==
	I0730 00:52:37.688134       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:52:47.695718       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:52:47.695841       1 main.go:299] handling current node
	I0730 00:52:47.695870       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:52:47.695899       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:52:47.696093       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:52:47.696127       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:52:57.693232       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:52:57.693284       1 main.go:299] handling current node
	I0730 00:52:57.693303       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:52:57.693311       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:52:57.693491       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:52:57.693529       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:53:07.689316       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:53:07.689441       1 main.go:299] handling current node
	I0730 00:53:07.689469       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:53:07.689487       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:53:07.689671       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:53:07.689729       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 00:53:17.689056       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 00:53:17.689165       1 main.go:299] handling current node
	I0730 00:53:17.689198       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 00:53:17.689217       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 00:53:17.689389       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 00:53:17.689416       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f3c56d8ba180012e211971649c57ec997e79a7e48d32f1b76c0fcb1a89f96a35] <==
	I0730 01:05:47.689119       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 01:05:57.695843       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 01:05:57.695881       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 01:05:57.696051       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 01:05:57.696072       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 01:05:57.696158       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 01:05:57.696213       1 main.go:299] handling current node
	I0730 01:06:07.689544       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 01:06:07.689656       1 main.go:299] handling current node
	I0730 01:06:07.689690       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 01:06:07.689708       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 01:06:07.689895       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 01:06:07.689920       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 01:06:17.688355       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 01:06:17.688427       1 main.go:299] handling current node
	I0730 01:06:17.688450       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 01:06:17.688457       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 01:06:17.688605       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 01:06:17.688612       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	I0730 01:06:27.696490       1 main.go:295] Handling node with IPs: map[192.168.39.80:{}]
	I0730 01:06:27.696624       1 main.go:299] handling current node
	I0730 01:06:27.696666       1 main.go:295] Handling node with IPs: map[192.168.39.126:{}]
	I0730 01:06:27.696685       1 main.go:322] Node ha-161305-m02 has CIDR [10.244.1.0/24] 
	I0730 01:06:27.696873       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0730 01:06:27.696905       1 main.go:322] Node ha-161305-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4db133561952508e903f498ac93deedc933d61c6f113f5d0e246ff051dea1320] <==
	I0730 00:56:24.537464       1 options.go:221] external host was not specified, using 192.168.39.80
	I0730 00:56:24.538601       1 server.go:148] Version: v1.30.3
	I0730 00:56:24.538656       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:56:25.102501       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0730 00:56:25.105325       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 00:56:25.108827       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0730 00:56:25.108897       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0730 00:56:25.109097       1 instance.go:299] Using reconciler: lease
	W0730 00:56:45.101409       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0730 00:56:45.102754       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0730 00:56:45.109622       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [79feee35b5d27b3f038bed0602fb04956d627055287b8a01c0a5d5c83ee67ce7] <==
	I0730 00:58:27.666520       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0730 00:58:27.666611       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0730 00:58:27.758110       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0730 00:58:27.758138       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0730 00:58:27.758278       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 00:58:27.759101       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0730 00:58:27.761741       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 00:58:27.769559       1 shared_informer.go:320] Caches are synced for configmaps
	I0730 00:58:27.773895       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0730 00:58:27.774048       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0730 00:58:27.774297       1 aggregator.go:165] initial CRD sync complete...
	I0730 00:58:27.774332       1 autoregister_controller.go:141] Starting autoregister controller
	I0730 00:58:27.774338       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0730 00:58:27.774343       1 cache.go:39] Caches are synced for autoregister controller
	W0730 00:58:27.802312       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.126]
	I0730 00:58:27.822714       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0730 00:58:27.836557       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 00:58:27.836649       1 policy_source.go:224] refreshing policies
	I0730 00:58:27.874560       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0730 00:58:27.904163       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 00:58:27.919050       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0730 00:58:27.924601       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0730 00:58:28.664781       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0730 00:58:29.140275       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.126 192.168.39.80]
	W0730 00:58:49.142489       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.80]
	
	
	==> kube-controller-manager [7fb7cbfbcdf21197cd412ccdfdfa61563a708989bcb0c5cb5a4aaa2069c2f041] <==
	I0730 00:58:40.515600       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0730 00:58:40.515683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0730 00:58:40.515799       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0730 00:58:40.587868       1 shared_informer.go:320] Caches are synced for persistent volume
	I0730 00:58:40.590571       1 shared_informer.go:320] Caches are synced for expand
	I0730 00:58:40.609652       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 00:58:40.618752       1 shared_informer.go:320] Caches are synced for attach detach
	I0730 00:58:40.629122       1 shared_informer.go:320] Caches are synced for ephemeral
	I0730 00:58:40.638661       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 00:58:40.649242       1 shared_informer.go:320] Caches are synced for PVC protection
	I0730 00:58:40.670359       1 shared_informer.go:320] Caches are synced for stateful set
	I0730 00:58:41.059061       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 00:58:41.059101       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0730 00:58:41.076191       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 00:59:07.366074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.020157ms"
	I0730 00:59:07.366186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.321µs"
	I0730 00:59:07.739419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.733µs"
	I0730 00:59:47.366403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.140537ms"
	I0730 00:59:47.366808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="165.819µs"
	I0730 01:03:40.546090       1 taint_eviction.go:113] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-fc5497c4f-7sdnf"
	I0730 01:03:40.567441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.682µs"
	I0730 01:03:40.601781       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.478665ms"
	I0730 01:03:40.616009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.192902ms"
	I0730 01:03:40.635665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.608478ms"
	I0730 01:03:40.635740       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.876µs"
	
	
	==> kube-controller-manager [c6d936e5ce1b0fe00843c89425e15c6948c485267a3e227326e712a02d879064] <==
	I0730 00:56:25.767189       1 serving.go:380] Generated self-signed cert in-memory
	I0730 00:56:26.207705       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0730 00:56:26.207809       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:56:26.209764       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0730 00:56:26.209998       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0730 00:56:26.210304       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0730 00:56:26.210558       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0730 00:56:46.212859       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.80:8443/healthz\": dial tcp 192.168.39.80:8443: connect: connection refused"
	
	
	==> kube-proxy [3794d8da6d0317335ea4f45df2a8495c0d48548498e71c2527caf07e098ce36f] <==
	I0730 00:48:28.260039       1 server_linux.go:69] "Using iptables proxy"
	E0730 00:48:30.765334       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:33.837250       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:36.908953       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:43.052445       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:48:52.268874       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-161305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0730 00:49:09.781706       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.80"]
	I0730 00:49:09.813651       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 00:49:09.813756       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 00:49:09.813786       1 server_linux.go:165] "Using iptables Proxier"
	I0730 00:49:09.816188       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 00:49:09.816436       1 server.go:872] "Version info" version="v1.30.3"
	I0730 00:49:09.816460       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 00:49:09.817946       1 config.go:192] "Starting service config controller"
	I0730 00:49:09.818049       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 00:49:09.818118       1 config.go:101] "Starting endpoint slice config controller"
	I0730 00:49:09.818137       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 00:49:09.818902       1 config.go:319] "Starting node config controller"
	I0730 00:49:09.818937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 00:49:09.918952       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 00:49:09.919067       1 shared_informer.go:320] Caches are synced for node config
	I0730 00:49:09.919080       1 shared_informer.go:320] Caches are synced for service config
	W0730 00:51:50.555495       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0730 00:51:50.555670       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0730 00:51:50.555731       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [8f58ee5417f0f7c5e891fdb31ef8252e34171e866e561cc2e26be3e7d87510c4] <==
	E0730 00:56:53.486389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:56:59.629258       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:57:11.917283       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0730 00:57:21.132442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:57:21.132552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-161305&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0730 00:57:24.204849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:57:24.205098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:57:24.205735       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:57:36.493383       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0730 00:57:45.709222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:57:45.709561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:57:48.781253       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0730 00:57:57.996698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:57:57.996778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-161305&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:58:01.070511       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0730 00:58:10.284886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:58:10.285042       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:58:13.357161       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:58:25.644895       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0730 00:58:25.644943       1 event_broadcaster.go:216] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{ha-161305.17e6d79a19e4bb63  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2024-07-30 00:56:04.375143909 +0000 UTC m=+46.847972247,Series:nil,ReportingController:kube-proxy,ReportingInstance:kube-proxy-ha-161305,Action:StartKubeProxy,Reason:Starting,Regarding:{Node  ha-161305 ha-161305   },Related:nil,Note:,Type:Normal,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	W0730 00:58:25.646877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0730 00:58:25.647216       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0730 00:58:43.776989       1 shared_informer.go:320] Caches are synced for node config
	I0730 00:58:47.376662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 00:59:05.876150       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [3ab677666e42b35784e015b38f8037f34d4b13e39a9c2d06105ef9a8b12ba32e] <==
	E0730 00:48:56.871699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.80:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:56.909449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.80:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:56.909551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.80:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.034082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.034195       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.221307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.80:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.221364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.80:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.516154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.80:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.516202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.80:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.783858       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.784029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.858869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.80:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.859101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.80:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:48:57.948932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.80:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:48:57.949834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.80:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:49:04.195620       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 00:49:04.196204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 00:49:04.197280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 00:49:04.197343       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 00:49:04.197477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 00:49:04.197511       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 00:49:04.197564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 00:49:04.197591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0730 00:49:11.292412       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0730 00:53:25.479918       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [707e73406ffad3fcd2b18c53714531516a2fd37c1fffde83e70824c6c425b072] <==
	W0730 00:57:48.601253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.80:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:57:48.601367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.80:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:57:55.424769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.80:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:57:55.425081       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.80:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:57:57.756606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.80:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	E0730 00:57:57.756713       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.80:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.80:8443: connect: connection refused
	W0730 00:58:17.070695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0730 00:58:17.070931       1 trace.go:236] Trace[852465434]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jul-2024 00:58:07.069) (total time: 10001ms):
	Trace[852465434]: ---"Objects listed" error:Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:58:17.070)
	Trace[852465434]: [10.001720156s] [10.001720156s] END
	E0730 00:58:17.071032       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.80:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0730 00:58:18.667371       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.80:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0730 00:58:18.667558       1 trace.go:236] Trace[868101016]: "Reflector ListAndWatch" name:runtime/asm_amd64.s:1695 (30-Jul-2024 00:58:08.666) (total time: 10001ms):
	Trace[868101016]: ---"Objects listed" error:Get "https://192.168.39.80:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:58:18.667)
	Trace[868101016]: [10.001245199s] [10.001245199s] END
	E0730 00:58:18.667598       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.80:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0730 00:58:21.148154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.80:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0730 00:58:21.148272       1 trace.go:236] Trace[1555013490]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jul-2024 00:58:11.146) (total time: 10001ms):
	Trace[1555013490]: ---"Objects listed" error:Get "https://192.168.39.80:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:58:21.148)
	Trace[1555013490]: [10.001356041s] [10.001356041s] END
	E0730 00:58:21.148308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.80:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0730 00:58:27.783596       1 trace.go:236] Trace[1339321642]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jul-2024 00:58:17.768) (total time: 10014ms):
	Trace[1339321642]: ---"Objects listed" error:<nil> 10014ms (00:58:27.783)
	Trace[1339321642]: [10.014676845s] [10.014676845s] END
	I0730 00:58:59.177114       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 01:02:36 ha-161305 kubelet[1372]: I0730 01:02:36.355531    1372 scope.go:117] "RemoveContainer" containerID="4b6f293d623755763934ee2832fc59acd26c230d0966ef668ca5713a09e87d1c"
	Jul 30 01:02:36 ha-161305 kubelet[1372]: E0730 01:02:36.355714    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75260b22-5ffc-4848-8c70-5b9cb3f010bf)\"" pod="kube-system/storage-provisioner" podUID="75260b22-5ffc-4848-8c70-5b9cb3f010bf"
	Jul 30 01:02:50 ha-161305 kubelet[1372]: I0730 01:02:50.354532    1372 scope.go:117] "RemoveContainer" containerID="4b6f293d623755763934ee2832fc59acd26c230d0966ef668ca5713a09e87d1c"
	Jul 30 01:02:50 ha-161305 kubelet[1372]: E0730 01:02:50.354737    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75260b22-5ffc-4848-8c70-5b9cb3f010bf)\"" pod="kube-system/storage-provisioner" podUID="75260b22-5ffc-4848-8c70-5b9cb3f010bf"
	Jul 30 01:03:04 ha-161305 kubelet[1372]: I0730 01:03:04.354538    1372 scope.go:117] "RemoveContainer" containerID="4b6f293d623755763934ee2832fc59acd26c230d0966ef668ca5713a09e87d1c"
	Jul 30 01:03:08 ha-161305 kubelet[1372]: E0730 01:03:08.372740    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 01:03:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 01:03:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 01:03:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 01:03:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 01:04:08 ha-161305 kubelet[1372]: E0730 01:04:08.372744    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 01:04:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 01:04:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 01:04:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 01:04:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 01:05:08 ha-161305 kubelet[1372]: E0730 01:05:08.372627    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 01:05:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 01:05:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 01:05:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 01:05:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 01:06:08 ha-161305 kubelet[1372]: E0730 01:06:08.372372    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 01:06:08 ha-161305 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 01:06:08 ha-161305 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 01:06:08 ha-161305 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 01:06:08 ha-161305 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0730 01:06:28.385127  528339 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19346-495103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
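The kube-proxy log above shows repeated "dial tcp 192.168.39.254:8443: connect: no route to host" errors against the HA virtual IP, and the kubelet log shows the iptables canary failing because the ip6tables `nat' table is missing on the node. A minimal sketch of follow-up checks one could run to confirm both conditions, assuming the ha-161305 profile from this run still exists (these commands are illustrative and were not part of the test):

	# Is the HA virtual IP reachable from inside the control-plane node? (The curl call only probes reachability.)
	out/minikube-linux-amd64 -p ha-161305 ssh "ip route get 192.168.39.254; curl -k -sS -m 5 https://192.168.39.254:8443/version || true"
	# Is the ip6tables 'nat' table present? The kubelet canary failure above suggests the module is not loaded.
	out/minikube-linux-amd64 -p ha-161305 ssh "sudo ip6tables -t nat -L -n; lsmod | grep ip6table"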
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-161305 -n ha-161305
helpers_test.go:261: (dbg) Run:  kubectl --context ha-161305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-ztwz5
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-161305 describe pod busybox-fc5497c4f-ztwz5
helpers_test.go:282: (dbg) kubectl --context ha-161305 describe pod busybox-fc5497c4f-ztwz5:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-ztwz5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59f55 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-59f55:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  2m50s  default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (785.99s)
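Per the FailedScheduling event in the describe output above, busybox-fc5497c4f-ztwz5 stays Pending because one of the three nodes carries the node.kubernetes.io/unreachable taint and the other two already run pods that conflict with the ReplicaSet's anti-affinity rules. A sketch of the queries one might run to confirm this, reusing the kubectl context from the post-mortem (only commands the harness already used plus standard kubectl flags):

	# Which pods are stuck, and why does the scheduler reject each node?
	kubectl --context ha-161305 get po -A --field-selector=status.phase!=Running
	kubectl --context ha-161305 describe pod busybox-fc5497c4f-ztwz5
	# Confirm the unreachable taint and see where the other busybox replicas landed.
	kubectl --context ha-161305 describe nodes | grep -i -A 1 taint
	kubectl --context ha-161305 get po -l app=busybox -o wide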

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (329.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-543365
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-543365
E0730 01:16:10.081005  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-543365: exit status 82 (2m2.686859866s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-543365-m03"  ...
	* Stopping node "multinode-543365-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-543365" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-543365 --wait=true -v=8 --alsologtostderr
E0730 01:18:42.935038  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-543365 --wait=true -v=8 --alsologtostderr: (3m24.406386554s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-543365
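The stop failed with GUEST_STOP_TIMEOUT (exit status 82) because two of the VMs never left the "Running" state within the stop timeout; the subsequent start then succeeded after 3m24s. The sequence the test executed, pulled out as a standalone repro sketch (binary path and profile name exactly as used in this run; the final logs command follows the hint printed in the error box above):

	out/minikube-linux-amd64 node list -p multinode-543365
	out/minikube-linux-amd64 stop -p multinode-543365            # fails here: exit status 82, GUEST_STOP_TIMEOUT
	out/minikube-linux-amd64 start -p multinode-543365 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-amd64 -p multinode-543365 logs --file=logs.txt   # collect logs to attach to a GitHub issue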
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-543365 -n multinode-543365
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-543365 logs -n 25: (1.443547487s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m02:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile929286498/001/cp-test_multinode-543365-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m02:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365:/home/docker/cp-test_multinode-543365-m02_multinode-543365.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365 sudo cat                                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m02_multinode-543365.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m02:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03:/home/docker/cp-test_multinode-543365-m02_multinode-543365-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365-m03 sudo cat                                   | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m02_multinode-543365-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp testdata/cp-test.txt                                                | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile929286498/001/cp-test_multinode-543365-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365:/home/docker/cp-test_multinode-543365-m03_multinode-543365.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365 sudo cat                                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m03_multinode-543365.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02:/home/docker/cp-test_multinode-543365-m03_multinode-543365-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365-m02 sudo cat                                   | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m03_multinode-543365-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-543365 node stop m03                                                          | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	| node    | multinode-543365 node start                                                             | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-543365                                                                | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:14 UTC |                     |
	| stop    | -p multinode-543365                                                                     | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:14 UTC |                     |
	| start   | -p multinode-543365                                                                     | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:16 UTC | 30 Jul 24 01:19 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-543365                                                                | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:19 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 01:16:21
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 01:16:21.191281  535383 out.go:291] Setting OutFile to fd 1 ...
	I0730 01:16:21.191396  535383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:16:21.191404  535383 out.go:304] Setting ErrFile to fd 2...
	I0730 01:16:21.191408  535383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:16:21.191608  535383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 01:16:21.192167  535383 out.go:298] Setting JSON to false
	I0730 01:16:21.193163  535383 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10723,"bootTime":1722291458,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 01:16:21.193229  535383 start.go:139] virtualization: kvm guest
	I0730 01:16:21.195638  535383 out.go:177] * [multinode-543365] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 01:16:21.197529  535383 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 01:16:21.197555  535383 notify.go:220] Checking for updates...
	I0730 01:16:21.200046  535383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 01:16:21.201440  535383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 01:16:21.202886  535383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 01:16:21.204270  535383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 01:16:21.205701  535383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 01:16:21.207407  535383 config.go:182] Loaded profile config "multinode-543365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 01:16:21.207503  535383 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 01:16:21.208060  535383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:16:21.208126  535383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:16:21.223599  535383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0730 01:16:21.224177  535383 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:16:21.224758  535383 main.go:141] libmachine: Using API Version  1
	I0730 01:16:21.224777  535383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:16:21.225217  535383 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:16:21.225452  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:16:21.263911  535383 out.go:177] * Using the kvm2 driver based on existing profile
	I0730 01:16:21.265257  535383 start.go:297] selected driver: kvm2
	I0730 01:16:21.265275  535383 start.go:901] validating driver "kvm2" against &{Name:multinode-543365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:16:21.265433  535383 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 01:16:21.265767  535383 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:16:21.265865  535383 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 01:16:21.281872  535383 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 01:16:21.282576  535383 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 01:16:21.282632  535383 cni.go:84] Creating CNI manager for ""
	I0730 01:16:21.282644  535383 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 01:16:21.282708  535383 start.go:340] cluster config:
	{Name:multinode-543365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:16:21.282860  535383 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:16:21.284523  535383 out.go:177] * Starting "multinode-543365" primary control-plane node in "multinode-543365" cluster
	I0730 01:16:21.285677  535383 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 01:16:21.285720  535383 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 01:16:21.285734  535383 cache.go:56] Caching tarball of preloaded images
	I0730 01:16:21.285830  535383 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 01:16:21.285843  535383 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 01:16:21.285959  535383 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/config.json ...
	I0730 01:16:21.286198  535383 start.go:360] acquireMachinesLock for multinode-543365: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 01:16:21.286249  535383 start.go:364] duration metric: took 29.734µs to acquireMachinesLock for "multinode-543365"
	I0730 01:16:21.286270  535383 start.go:96] Skipping create...Using existing machine configuration
	I0730 01:16:21.286280  535383 fix.go:54] fixHost starting: 
	I0730 01:16:21.286586  535383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:16:21.286626  535383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:16:21.301307  535383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0730 01:16:21.301858  535383 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:16:21.302508  535383 main.go:141] libmachine: Using API Version  1
	I0730 01:16:21.302533  535383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:16:21.302830  535383 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:16:21.303025  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:16:21.303187  535383 main.go:141] libmachine: (multinode-543365) Calling .GetState
	I0730 01:16:21.304788  535383 fix.go:112] recreateIfNeeded on multinode-543365: state=Running err=<nil>
	W0730 01:16:21.304812  535383 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 01:16:21.307737  535383 out.go:177] * Updating the running kvm2 "multinode-543365" VM ...
	I0730 01:16:21.309047  535383 machine.go:94] provisionDockerMachine start ...
	I0730 01:16:21.309078  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:16:21.309309  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.312674  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.313281  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.313326  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.313545  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:21.313759  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.313921  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.314063  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:21.314236  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:16:21.314600  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:16:21.314614  535383 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 01:16:21.435368  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-543365
	
	I0730 01:16:21.435407  535383 main.go:141] libmachine: (multinode-543365) Calling .GetMachineName
	I0730 01:16:21.435726  535383 buildroot.go:166] provisioning hostname "multinode-543365"
	I0730 01:16:21.435758  535383 main.go:141] libmachine: (multinode-543365) Calling .GetMachineName
	I0730 01:16:21.435961  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.439671  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.440109  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.440140  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.440279  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:21.440480  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.440656  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.440816  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:21.441013  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:16:21.441194  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:16:21.441208  535383 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-543365 && echo "multinode-543365" | sudo tee /etc/hostname
	I0730 01:16:21.568904  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-543365
	
	I0730 01:16:21.568953  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.572044  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.572529  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.572563  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.572750  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:21.572952  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.573131  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.573260  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:21.573427  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:16:21.573589  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:16:21.573606  535383 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-543365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-543365/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-543365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 01:16:21.685395  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: 
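
The hostname step above reduces to two idempotent shell commands run over SSH. Below is a minimal Go sketch of the same sequence, replayed locally with os/exec instead of over SSH; the hostname value is the one from this log, and running it for real would edit /etc/hostname and /etc/hosts.

// Minimal sketch of the hostname-provisioning commands seen above, replayed
// locally with os/exec instead of over SSH. Illustration only, not minikube's
// own implementation.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	host := "multinode-543365" // taken from the log
	cmds := []string{
		fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, host, host),
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, host),
	}
	for _, c := range cmds {
		out, err := exec.Command("sh", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\n%s(err=%v)\n", c, out, err)
	}
}
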
	I0730 01:16:21.685434  535383 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 01:16:21.685479  535383 buildroot.go:174] setting up certificates
	I0730 01:16:21.685488  535383 provision.go:84] configureAuth start
	I0730 01:16:21.685501  535383 main.go:141] libmachine: (multinode-543365) Calling .GetMachineName
	I0730 01:16:21.685836  535383 main.go:141] libmachine: (multinode-543365) Calling .GetIP
	I0730 01:16:21.688368  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.688815  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.688846  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.689062  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.691451  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.691779  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.691810  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.691943  535383 provision.go:143] copyHostCerts
	I0730 01:16:21.691980  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 01:16:21.692035  535383 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 01:16:21.692054  535383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 01:16:21.692139  535383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 01:16:21.692238  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 01:16:21.692264  535383 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 01:16:21.692271  535383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 01:16:21.692310  535383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 01:16:21.692383  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 01:16:21.692405  535383 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 01:16:21.692412  535383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 01:16:21.692449  535383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 01:16:21.692519  535383 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.multinode-543365 san=[127.0.0.1 192.168.39.235 localhost minikube multinode-543365]
	I0730 01:16:21.839675  535383 provision.go:177] copyRemoteCerts
	I0730 01:16:21.839740  535383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 01:16:21.839768  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.842411  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.842822  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.842850  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.843044  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:21.843240  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.843434  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:21.843590  535383 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365/id_rsa Username:docker}
	I0730 01:16:21.926837  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 01:16:21.926922  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 01:16:21.953564  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 01:16:21.953647  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0730 01:16:21.976844  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 01:16:21.976925  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0730 01:16:22.000511  535383 provision.go:87] duration metric: took 315.009219ms to configureAuth
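
The configureAuth step generates a serving certificate whose SANs are the names and IPs listed above, signs it with the profile's CA, and then copies ca.pem/server.pem/server-key.pem to /etc/docker on the guest. The standard-library sketch below shows what that certificate generation amounts to, using a throwaway CA and the SAN list from the log; subject fields and validity are illustrative, and this is not minikube's pkg/util code.

// Sketch: issue a serving certificate valid for the SANs logged above,
// signed by a throwaway CA (errors elided for brevity).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; in minikube this corresponds to ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-543365"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-543365"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.235")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
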
	I0730 01:16:22.000542  535383 buildroot.go:189] setting minikube options for container-runtime
	I0730 01:16:22.000767  535383 config.go:182] Loaded profile config "multinode-543365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 01:16:22.000843  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:22.003709  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:22.004159  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:22.004193  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:22.004391  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:22.004589  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:22.004770  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:22.004902  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:22.005074  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:16:22.005228  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:16:22.005245  535383 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 01:17:52.701740  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 01:17:52.701780  535383 machine.go:97] duration metric: took 1m31.392713504s to provisionDockerMachine
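
Most of the 1m31s recorded for provisionDockerMachine in this run was spent waiting for the command issued at 01:16:22 (write CRIO_MINIKUBE_OPTIONS and restart CRI-O) to return at 01:17:52. The sketch below reproduces that single step, with the option value and paths copied from the log; it needs root and a CRI-O host to do anything useful.

// Sketch of the runtime-options step: write the sysconfig drop-in and
// restart CRI-O, exactly as the SSH command in the log does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := `sudo mkdir -p /etc/sysconfig && printf '%s' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
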
	I0730 01:17:52.701798  535383 start.go:293] postStartSetup for "multinode-543365" (driver="kvm2")
	I0730 01:17:52.701814  535383 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 01:17:52.701845  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.702350  535383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 01:17:52.702391  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:17:52.706505  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.709429  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.709464  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.709694  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:17:52.710064  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.710255  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:17:52.710459  535383 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365/id_rsa Username:docker}
	I0730 01:17:52.804508  535383 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 01:17:52.809498  535383 command_runner.go:130] > NAME=Buildroot
	I0730 01:17:52.809517  535383 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0730 01:17:52.809523  535383 command_runner.go:130] > ID=buildroot
	I0730 01:17:52.809529  535383 command_runner.go:130] > VERSION_ID=2023.02.9
	I0730 01:17:52.809536  535383 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0730 01:17:52.809628  535383 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 01:17:52.809648  535383 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 01:17:52.809695  535383 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 01:17:52.809775  535383 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 01:17:52.809803  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 01:17:52.809900  535383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 01:17:52.823698  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:17:52.849407  535383 start.go:296] duration metric: took 147.590667ms for postStartSetup
	I0730 01:17:52.849460  535383 fix.go:56] duration metric: took 1m31.563180244s for fixHost
	I0730 01:17:52.849495  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:17:52.852582  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.853078  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.853109  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.853295  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:17:52.853482  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.853639  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.853811  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:17:52.854043  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:17:52.854209  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:17:52.854221  535383 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 01:17:52.965408  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722302272.942958518
	
	I0730 01:17:52.965446  535383 fix.go:216] guest clock: 1722302272.942958518
	I0730 01:17:52.965455  535383 fix.go:229] Guest: 2024-07-30 01:17:52.942958518 +0000 UTC Remote: 2024-07-30 01:17:52.849472098 +0000 UTC m=+91.694556362 (delta=93.48642ms)
	I0730 01:17:52.965480  535383 fix.go:200] guest clock delta is within tolerance: 93.48642ms
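
The guest-clock check above samples "date +%s.%N" on the guest and compares it with the host's wall clock, accepting the result when the drift is small. A sketch of that comparison using the two timestamps from the log follows; the 2-second tolerance is an assumption for illustration, not minikube's actual threshold.

// Sketch: parse the guest's seconds.nanoseconds timestamp and compare it to
// the host time recorded in the log (expected delta is ~93.5ms).
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestRaw := "1722302272.942958518" // guest `date +%s.%N` output from the log
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Date(2024, time.July, 30, 1, 17, 52, 849472098, time.UTC) // "Remote" in the log
	delta := guest.Sub(host)

	tolerance := 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v within tolerance=%v: %t\n",
		delta, tolerance, math.Abs(float64(delta)) <= float64(tolerance))
}
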
	I0730 01:17:52.965487  535383 start.go:83] releasing machines lock for "multinode-543365", held for 1m31.679225146s
	I0730 01:17:52.965513  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.965746  535383 main.go:141] libmachine: (multinode-543365) Calling .GetIP
	I0730 01:17:52.968413  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.968802  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.968837  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.969003  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.969618  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.969811  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.969927  535383 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 01:17:52.969969  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:17:52.970035  535383 ssh_runner.go:195] Run: cat /version.json
	I0730 01:17:52.970069  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:17:52.972727  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.972895  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.973231  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.973261  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.973415  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:17:52.973562  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.973582  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.973586  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.973748  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:17:52.973799  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:17:52.973894  535383 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365/id_rsa Username:docker}
	I0730 01:17:52.974002  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.974147  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:17:52.974307  535383 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365/id_rsa Username:docker}
	I0730 01:17:53.080047  535383 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0730 01:17:53.080761  535383 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0730 01:17:53.080941  535383 ssh_runner.go:195] Run: systemctl --version
	I0730 01:17:53.086625  535383 command_runner.go:130] > systemd 252 (252)
	I0730 01:17:53.086667  535383 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0730 01:17:53.086715  535383 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 01:17:53.244970  535383 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0730 01:17:53.253677  535383 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0730 01:17:53.253891  535383 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 01:17:53.253955  535383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 01:17:53.264113  535383 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 01:17:53.264141  535383 start.go:495] detecting cgroup driver to use...
	I0730 01:17:53.264209  535383 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 01:17:53.281969  535383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 01:17:53.295630  535383 docker.go:217] disabling cri-docker service (if available) ...
	I0730 01:17:53.295708  535383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 01:17:53.310774  535383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 01:17:53.325215  535383 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 01:17:53.496754  535383 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 01:17:53.634308  535383 docker.go:233] disabling docker service ...
	I0730 01:17:53.634388  535383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 01:17:53.649661  535383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 01:17:53.663051  535383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 01:17:53.800061  535383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 01:17:53.934742  535383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 01:17:53.948670  535383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 01:17:53.967200  535383 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0730 01:17:53.967243  535383 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 01:17:53.967296  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:53.977455  535383 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 01:17:53.977514  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:53.987231  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:53.996764  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:54.006426  535383 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 01:17:54.016837  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:54.026449  535383 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:54.036535  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:54.046066  535383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 01:17:54.054487  535383 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0730 01:17:54.054580  535383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 01:17:54.063160  535383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:17:54.194201  535383 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 01:17:57.428759  535383 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.234509364s)
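
The block of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod") before reloading systemd and restarting CRI-O. The sketch below replays those edits in the same order; the sysctl and /etc/cni/net.mk steps are omitted for brevity, and it should only be run on a disposable guest.

// Sketch that replays the CRI-O configuration edits from the log and then
// restarts the service. The sed expressions are copied from the commands above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
			fmt.Printf("%s failed: %v\n%s", s, err, out)
			return
		}
	}
	fmt.Println("crio reconfigured and restarted")
}
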
	I0730 01:17:57.428802  535383 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 01:17:57.428861  535383 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 01:17:57.433794  535383 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0730 01:17:57.433820  535383 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0730 01:17:57.433831  535383 command_runner.go:130] > Device: 0,22	Inode: 1346        Links: 1
	I0730 01:17:57.433839  535383 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0730 01:17:57.433847  535383 command_runner.go:130] > Access: 2024-07-30 01:17:57.306296141 +0000
	I0730 01:17:57.433856  535383 command_runner.go:130] > Modify: 2024-07-30 01:17:57.306296141 +0000
	I0730 01:17:57.433870  535383 command_runner.go:130] > Change: 2024-07-30 01:17:57.306296141 +0000
	I0730 01:17:57.433879  535383 command_runner.go:130] >  Birth: -
	I0730 01:17:57.433903  535383 start.go:563] Will wait 60s for crictl version
	I0730 01:17:57.433950  535383 ssh_runner.go:195] Run: which crictl
	I0730 01:17:57.437289  535383 command_runner.go:130] > /usr/bin/crictl
	I0730 01:17:57.437400  535383 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 01:17:57.479036  535383 command_runner.go:130] > Version:  0.1.0
	I0730 01:17:57.479064  535383 command_runner.go:130] > RuntimeName:  cri-o
	I0730 01:17:57.479070  535383 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0730 01:17:57.479078  535383 command_runner.go:130] > RuntimeApiVersion:  v1
	I0730 01:17:57.479144  535383 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 01:17:57.479263  535383 ssh_runner.go:195] Run: crio --version
	I0730 01:17:57.504577  535383 command_runner.go:130] > crio version 1.29.1
	I0730 01:17:57.504607  535383 command_runner.go:130] > Version:        1.29.1
	I0730 01:17:57.504615  535383 command_runner.go:130] > GitCommit:      unknown
	I0730 01:17:57.504621  535383 command_runner.go:130] > GitCommitDate:  unknown
	I0730 01:17:57.504627  535383 command_runner.go:130] > GitTreeState:   clean
	I0730 01:17:57.504641  535383 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0730 01:17:57.504649  535383 command_runner.go:130] > GoVersion:      go1.21.6
	I0730 01:17:57.504655  535383 command_runner.go:130] > Compiler:       gc
	I0730 01:17:57.504662  535383 command_runner.go:130] > Platform:       linux/amd64
	I0730 01:17:57.504669  535383 command_runner.go:130] > Linkmode:       dynamic
	I0730 01:17:57.504673  535383 command_runner.go:130] > BuildTags:      
	I0730 01:17:57.504678  535383 command_runner.go:130] >   containers_image_ostree_stub
	I0730 01:17:57.504682  535383 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0730 01:17:57.504686  535383 command_runner.go:130] >   btrfs_noversion
	I0730 01:17:57.504691  535383 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0730 01:17:57.504700  535383 command_runner.go:130] >   libdm_no_deferred_remove
	I0730 01:17:57.504715  535383 command_runner.go:130] >   seccomp
	I0730 01:17:57.504724  535383 command_runner.go:130] > LDFlags:          unknown
	I0730 01:17:57.504731  535383 command_runner.go:130] > SeccompEnabled:   true
	I0730 01:17:57.504737  535383 command_runner.go:130] > AppArmorEnabled:  false
	I0730 01:17:57.505825  535383 ssh_runner.go:195] Run: crio --version
	I0730 01:17:57.531848  535383 command_runner.go:130] > crio version 1.29.1
	I0730 01:17:57.531872  535383 command_runner.go:130] > Version:        1.29.1
	I0730 01:17:57.531879  535383 command_runner.go:130] > GitCommit:      unknown
	I0730 01:17:57.531882  535383 command_runner.go:130] > GitCommitDate:  unknown
	I0730 01:17:57.531886  535383 command_runner.go:130] > GitTreeState:   clean
	I0730 01:17:57.531892  535383 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0730 01:17:57.531896  535383 command_runner.go:130] > GoVersion:      go1.21.6
	I0730 01:17:57.531900  535383 command_runner.go:130] > Compiler:       gc
	I0730 01:17:57.531908  535383 command_runner.go:130] > Platform:       linux/amd64
	I0730 01:17:57.531911  535383 command_runner.go:130] > Linkmode:       dynamic
	I0730 01:17:57.531921  535383 command_runner.go:130] > BuildTags:      
	I0730 01:17:57.531926  535383 command_runner.go:130] >   containers_image_ostree_stub
	I0730 01:17:57.531930  535383 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0730 01:17:57.531933  535383 command_runner.go:130] >   btrfs_noversion
	I0730 01:17:57.531937  535383 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0730 01:17:57.531941  535383 command_runner.go:130] >   libdm_no_deferred_remove
	I0730 01:17:57.531945  535383 command_runner.go:130] >   seccomp
	I0730 01:17:57.531949  535383 command_runner.go:130] > LDFlags:          unknown
	I0730 01:17:57.531953  535383 command_runner.go:130] > SeccompEnabled:   true
	I0730 01:17:57.531959  535383 command_runner.go:130] > AppArmorEnabled:  false
	I0730 01:17:57.534871  535383 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 01:17:57.536180  535383 main.go:141] libmachine: (multinode-543365) Calling .GetIP
	I0730 01:17:57.538600  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:57.538954  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:57.538982  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:57.539164  535383 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 01:17:57.543178  535383 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0730 01:17:57.543305  535383 kubeadm.go:883] updating cluster {Name:multinode-543365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 01:17:57.543477  535383 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 01:17:57.543536  535383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:17:57.592034  535383 command_runner.go:130] > {
	I0730 01:17:57.592065  535383 command_runner.go:130] >   "images": [
	I0730 01:17:57.592072  535383 command_runner.go:130] >     {
	I0730 01:17:57.592083  535383 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0730 01:17:57.592090  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592099  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0730 01:17:57.592104  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592110  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592124  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0730 01:17:57.592142  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0730 01:17:57.592150  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592158  535383 command_runner.go:130] >       "size": "87165492",
	I0730 01:17:57.592167  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592173  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592188  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592197  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592205  535383 command_runner.go:130] >     },
	I0730 01:17:57.592211  535383 command_runner.go:130] >     {
	I0730 01:17:57.592223  535383 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0730 01:17:57.592233  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592244  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0730 01:17:57.592253  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592262  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592276  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0730 01:17:57.592289  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0730 01:17:57.592297  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592306  535383 command_runner.go:130] >       "size": "87174707",
	I0730 01:17:57.592312  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592325  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592331  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592335  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592340  535383 command_runner.go:130] >     },
	I0730 01:17:57.592343  535383 command_runner.go:130] >     {
	I0730 01:17:57.592352  535383 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0730 01:17:57.592358  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592363  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0730 01:17:57.592369  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592375  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592383  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0730 01:17:57.592393  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0730 01:17:57.592399  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592405  535383 command_runner.go:130] >       "size": "1363676",
	I0730 01:17:57.592411  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592416  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592422  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592427  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592432  535383 command_runner.go:130] >     },
	I0730 01:17:57.592436  535383 command_runner.go:130] >     {
	I0730 01:17:57.592444  535383 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0730 01:17:57.592450  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592457  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0730 01:17:57.592460  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592464  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592472  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0730 01:17:57.592485  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0730 01:17:57.592491  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592495  535383 command_runner.go:130] >       "size": "31470524",
	I0730 01:17:57.592501  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592505  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592511  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592515  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592521  535383 command_runner.go:130] >     },
	I0730 01:17:57.592531  535383 command_runner.go:130] >     {
	I0730 01:17:57.592539  535383 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0730 01:17:57.592545  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592550  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0730 01:17:57.592556  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592560  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592570  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0730 01:17:57.592579  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0730 01:17:57.592585  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592589  535383 command_runner.go:130] >       "size": "61245718",
	I0730 01:17:57.592595  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592600  535383 command_runner.go:130] >       "username": "nonroot",
	I0730 01:17:57.592606  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592610  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592615  535383 command_runner.go:130] >     },
	I0730 01:17:57.592619  535383 command_runner.go:130] >     {
	I0730 01:17:57.592624  535383 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0730 01:17:57.592630  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592635  535383 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0730 01:17:57.592641  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592645  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592654  535383 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0730 01:17:57.592663  535383 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0730 01:17:57.592668  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592672  535383 command_runner.go:130] >       "size": "150779692",
	I0730 01:17:57.592677  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.592681  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.592687  535383 command_runner.go:130] >       },
	I0730 01:17:57.592690  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592696  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592700  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592722  535383 command_runner.go:130] >     },
	I0730 01:17:57.592730  535383 command_runner.go:130] >     {
	I0730 01:17:57.592740  535383 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0730 01:17:57.592747  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592752  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0730 01:17:57.592758  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592762  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592771  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0730 01:17:57.592781  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0730 01:17:57.592784  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592790  535383 command_runner.go:130] >       "size": "117609954",
	I0730 01:17:57.592794  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.592800  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.592803  535383 command_runner.go:130] >       },
	I0730 01:17:57.592809  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592814  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592819  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592823  535383 command_runner.go:130] >     },
	I0730 01:17:57.592828  535383 command_runner.go:130] >     {
	I0730 01:17:57.592834  535383 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0730 01:17:57.592840  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592846  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0730 01:17:57.592851  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592855  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592871  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0730 01:17:57.592881  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0730 01:17:57.592887  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592891  535383 command_runner.go:130] >       "size": "112198984",
	I0730 01:17:57.592897  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.592901  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.592904  535383 command_runner.go:130] >       },
	I0730 01:17:57.592908  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592911  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592915  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592918  535383 command_runner.go:130] >     },
	I0730 01:17:57.592921  535383 command_runner.go:130] >     {
	I0730 01:17:57.592927  535383 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0730 01:17:57.592930  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592935  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0730 01:17:57.592938  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592942  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592959  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0730 01:17:57.592966  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0730 01:17:57.592969  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592973  535383 command_runner.go:130] >       "size": "85953945",
	I0730 01:17:57.592976  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592980  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592983  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592986  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592989  535383 command_runner.go:130] >     },
	I0730 01:17:57.592992  535383 command_runner.go:130] >     {
	I0730 01:17:57.592997  535383 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0730 01:17:57.593001  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.593006  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0730 01:17:57.593009  535383 command_runner.go:130] >       ],
	I0730 01:17:57.593012  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.593019  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0730 01:17:57.593025  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0730 01:17:57.593028  535383 command_runner.go:130] >       ],
	I0730 01:17:57.593032  535383 command_runner.go:130] >       "size": "63051080",
	I0730 01:17:57.593036  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.593040  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.593044  535383 command_runner.go:130] >       },
	I0730 01:17:57.593048  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.593052  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.593057  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.593063  535383 command_runner.go:130] >     },
	I0730 01:17:57.593071  535383 command_runner.go:130] >     {
	I0730 01:17:57.593080  535383 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0730 01:17:57.593089  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.593099  535383 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0730 01:17:57.593104  535383 command_runner.go:130] >       ],
	I0730 01:17:57.593113  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.593127  535383 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0730 01:17:57.593137  535383 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0730 01:17:57.593143  535383 command_runner.go:130] >       ],
	I0730 01:17:57.593147  535383 command_runner.go:130] >       "size": "750414",
	I0730 01:17:57.593153  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.593157  535383 command_runner.go:130] >         "value": "65535"
	I0730 01:17:57.593162  535383 command_runner.go:130] >       },
	I0730 01:17:57.593167  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.593173  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.593177  535383 command_runner.go:130] >       "pinned": true
	I0730 01:17:57.593182  535383 command_runner.go:130] >     }
	I0730 01:17:57.593185  535383 command_runner.go:130] >   ]
	I0730 01:17:57.593188  535383 command_runner.go:130] > }
	I0730 01:17:57.593379  535383 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 01:17:57.593391  535383 crio.go:433] Images already preloaded, skipping extraction
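
The preload decision above is driven by the output of "sudo crictl images --output json": when the expected tags are already present, extraction of the preload tarball is skipped. The sketch below parses that JSON shape (fields as printed above) and checks for a few of the v1.30.3 images; the required-image list here is an illustrative subset, not minikube's exact list.

// Sketch: list images via crictl, decode the JSON printed above, and check
// whether a handful of expected repo tags are already present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("parse:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	required := []string{ // illustrative subset of the v1.30.3 image set
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	}
	for _, r := range required {
		fmt.Printf("%-45s present=%t\n", r, have[r])
	}
}
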
	I0730 01:17:57.593445  535383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:17:57.623888  535383 command_runner.go:130] > {
	I0730 01:17:57.623916  535383 command_runner.go:130] >   "images": [
	I0730 01:17:57.623923  535383 command_runner.go:130] >     {
	I0730 01:17:57.623936  535383 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0730 01:17:57.623943  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.623951  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0730 01:17:57.623957  535383 command_runner.go:130] >       ],
	I0730 01:17:57.623965  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.623982  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0730 01:17:57.623995  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0730 01:17:57.624004  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624014  535383 command_runner.go:130] >       "size": "87165492",
	I0730 01:17:57.624023  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624032  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624043  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624052  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624058  535383 command_runner.go:130] >     },
	I0730 01:17:57.624066  535383 command_runner.go:130] >     {
	I0730 01:17:57.624080  535383 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0730 01:17:57.624088  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624099  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0730 01:17:57.624107  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624113  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624127  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0730 01:17:57.624142  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0730 01:17:57.624151  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624160  535383 command_runner.go:130] >       "size": "87174707",
	I0730 01:17:57.624169  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624186  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624203  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624215  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624223  535383 command_runner.go:130] >     },
	I0730 01:17:57.624232  535383 command_runner.go:130] >     {
	I0730 01:17:57.624242  535383 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0730 01:17:57.624251  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624262  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0730 01:17:57.624271  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624280  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624296  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0730 01:17:57.624310  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0730 01:17:57.624324  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624334  535383 command_runner.go:130] >       "size": "1363676",
	I0730 01:17:57.624343  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624352  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624365  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624374  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624382  535383 command_runner.go:130] >     },
	I0730 01:17:57.624391  535383 command_runner.go:130] >     {
	I0730 01:17:57.624401  535383 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0730 01:17:57.624410  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624420  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0730 01:17:57.624426  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624430  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624440  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0730 01:17:57.624457  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0730 01:17:57.624463  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624467  535383 command_runner.go:130] >       "size": "31470524",
	I0730 01:17:57.624473  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624477  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624483  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624487  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624492  535383 command_runner.go:130] >     },
	I0730 01:17:57.624496  535383 command_runner.go:130] >     {
	I0730 01:17:57.624502  535383 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0730 01:17:57.624508  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624519  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0730 01:17:57.624525  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624529  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624538  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0730 01:17:57.624547  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0730 01:17:57.624553  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624557  535383 command_runner.go:130] >       "size": "61245718",
	I0730 01:17:57.624563  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624567  535383 command_runner.go:130] >       "username": "nonroot",
	I0730 01:17:57.624571  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624577  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624580  535383 command_runner.go:130] >     },
	I0730 01:17:57.624585  535383 command_runner.go:130] >     {
	I0730 01:17:57.624591  535383 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0730 01:17:57.624597  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624602  535383 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0730 01:17:57.624607  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624611  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624620  535383 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0730 01:17:57.624629  535383 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0730 01:17:57.624634  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624638  535383 command_runner.go:130] >       "size": "150779692",
	I0730 01:17:57.624644  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.624648  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.624655  535383 command_runner.go:130] >       },
	I0730 01:17:57.624661  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624665  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624669  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624674  535383 command_runner.go:130] >     },
	I0730 01:17:57.624677  535383 command_runner.go:130] >     {
	I0730 01:17:57.624683  535383 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0730 01:17:57.624689  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624694  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0730 01:17:57.624700  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624714  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624729  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0730 01:17:57.624750  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0730 01:17:57.624757  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624761  535383 command_runner.go:130] >       "size": "117609954",
	I0730 01:17:57.624767  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.624770  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.624778  535383 command_runner.go:130] >       },
	I0730 01:17:57.624787  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624796  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624805  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624813  535383 command_runner.go:130] >     },
	I0730 01:17:57.624817  535383 command_runner.go:130] >     {
	I0730 01:17:57.624828  535383 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0730 01:17:57.624836  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624847  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0730 01:17:57.624854  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624861  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624894  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0730 01:17:57.624911  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0730 01:17:57.624916  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624925  535383 command_runner.go:130] >       "size": "112198984",
	I0730 01:17:57.624933  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.624941  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.624946  535383 command_runner.go:130] >       },
	I0730 01:17:57.624953  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624959  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624968  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624973  535383 command_runner.go:130] >     },
	I0730 01:17:57.624980  535383 command_runner.go:130] >     {
	I0730 01:17:57.624990  535383 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0730 01:17:57.624998  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.625007  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0730 01:17:57.625015  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625022  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.625036  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0730 01:17:57.625056  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0730 01:17:57.625061  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625079  535383 command_runner.go:130] >       "size": "85953945",
	I0730 01:17:57.625089  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.625095  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.625103  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.625109  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.625117  535383 command_runner.go:130] >     },
	I0730 01:17:57.625122  535383 command_runner.go:130] >     {
	I0730 01:17:57.625135  535383 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0730 01:17:57.625143  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.625153  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0730 01:17:57.625161  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625167  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.625181  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0730 01:17:57.625195  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0730 01:17:57.625204  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625212  535383 command_runner.go:130] >       "size": "63051080",
	I0730 01:17:57.625220  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.625230  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.625235  535383 command_runner.go:130] >       },
	I0730 01:17:57.625245  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.625254  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.625263  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.625272  535383 command_runner.go:130] >     },
	I0730 01:17:57.625279  535383 command_runner.go:130] >     {
	I0730 01:17:57.625286  535383 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0730 01:17:57.625292  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.625297  535383 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0730 01:17:57.625302  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625307  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.625321  535383 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0730 01:17:57.625330  535383 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0730 01:17:57.625335  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625340  535383 command_runner.go:130] >       "size": "750414",
	I0730 01:17:57.625345  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.625350  535383 command_runner.go:130] >         "value": "65535"
	I0730 01:17:57.625356  535383 command_runner.go:130] >       },
	I0730 01:17:57.625367  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.625374  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.625378  535383 command_runner.go:130] >       "pinned": true
	I0730 01:17:57.625383  535383 command_runner.go:130] >     }
	I0730 01:17:57.625386  535383 command_runner.go:130] >   ]
	I0730 01:17:57.625392  535383 command_runner.go:130] > }
	I0730 01:17:57.625525  535383 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 01:17:57.625537  535383 cache_images.go:84] Images are preloaded, skipping loading
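[editor's aside] The two `sudo crictl images --output json` dumps above are what the preload check consumes: the JSON is decoded and the repo tags are compared against the image list expected for this Kubernetes version. The following is a minimal, self-contained Go sketch of that kind of check; the struct fields mirror the JSON shape shown in the log, but the function name, the sample input, and the expected-image list are illustrative and are not minikube's actual code.

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the JSON printed by `crictl images --output json` above:
// a top-level "images" array whose entries carry repoTags, repoDigests and a
// pinned flag.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// allPreloaded reports whether every expected tag appears in the crictl output.
// Illustrative only, not minikube's implementation.
func allPreloaded(crictlJSON []byte, expected []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(crictlJSON, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, tag := range expected {
		if !have[tag] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Trimmed sample in the same shape as the log output above.
	sample := []byte(`{"images":[{"id":"55bb0","repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"repoDigests":[],"pinned":false}]}`)
	ok, err := allPreloaded(sample, []string{"registry.k8s.io/kube-proxy:v1.30.3"})
	fmt.Println(ok, err) // true <nil>
}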
	I0730 01:17:57.625544  535383 kubeadm.go:934] updating node { 192.168.39.235 8443 v1.30.3 crio true true} ...
	I0730 01:17:57.625658  535383 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-543365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
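[editor's aside] The kubelet drop-in printed above is assembled from the node parameters in the cluster config that follows it (Kubernetes v1.30.3, node name multinode-543365, node IP 192.168.39.235). A hedged Go sketch of assembling such a unit from those three parameters is below; the function name and layout are illustrative and do not reproduce minikube's kubeadm bootstrapper template.

package main

import (
	"fmt"
	"strings"
)

// kubeletUnit builds a systemd drop-in shaped like the one logged above from
// the node's parameters. Names here are illustrative, not minikube internals.
func kubeletUnit(k8sVersion, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet %s

[Install]
`, k8sVersion, strings.Join(flags, " "))
}

func main() {
	fmt.Print(kubeletUnit("v1.30.3", "multinode-543365", "192.168.39.235"))
}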
	I0730 01:17:57.625727  535383 ssh_runner.go:195] Run: crio config
	I0730 01:17:57.657701  535383 command_runner.go:130] ! time="2024-07-30 01:17:57.635208127Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0730 01:17:57.663983  535383 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0730 01:17:57.671475  535383 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0730 01:17:57.671499  535383 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0730 01:17:57.671505  535383 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0730 01:17:57.671509  535383 command_runner.go:130] > #
	I0730 01:17:57.671515  535383 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0730 01:17:57.671521  535383 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0730 01:17:57.671527  535383 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0730 01:17:57.671534  535383 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0730 01:17:57.671537  535383 command_runner.go:130] > # reload'.
	I0730 01:17:57.671543  535383 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0730 01:17:57.671548  535383 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0730 01:17:57.671554  535383 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0730 01:17:57.671562  535383 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0730 01:17:57.671571  535383 command_runner.go:130] > [crio]
	I0730 01:17:57.671580  535383 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0730 01:17:57.671587  535383 command_runner.go:130] > # containers images, in this directory.
	I0730 01:17:57.671597  535383 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0730 01:17:57.671607  535383 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0730 01:17:57.671615  535383 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0730 01:17:57.671622  535383 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0730 01:17:57.671626  535383 command_runner.go:130] > # imagestore = ""
	I0730 01:17:57.671632  535383 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0730 01:17:57.671642  535383 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0730 01:17:57.671646  535383 command_runner.go:130] > storage_driver = "overlay"
	I0730 01:17:57.671659  535383 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0730 01:17:57.671672  535383 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0730 01:17:57.671686  535383 command_runner.go:130] > storage_option = [
	I0730 01:17:57.671693  535383 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0730 01:17:57.671697  535383 command_runner.go:130] > ]
	I0730 01:17:57.671705  535383 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0730 01:17:57.671714  535383 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0730 01:17:57.671720  535383 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0730 01:17:57.671726  535383 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0730 01:17:57.671733  535383 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0730 01:17:57.671740  535383 command_runner.go:130] > # always happen on a node reboot
	I0730 01:17:57.671751  535383 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0730 01:17:57.671769  535383 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0730 01:17:57.671781  535383 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0730 01:17:57.671788  535383 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0730 01:17:57.671793  535383 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0730 01:17:57.671802  535383 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0730 01:17:57.671811  535383 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0730 01:17:57.671817  535383 command_runner.go:130] > # internal_wipe = true
	I0730 01:17:57.671828  535383 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0730 01:17:57.671840  535383 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0730 01:17:57.671850  535383 command_runner.go:130] > # internal_repair = false
	I0730 01:17:57.671861  535383 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0730 01:17:57.671874  535383 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0730 01:17:57.671884  535383 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0730 01:17:57.671891  535383 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0730 01:17:57.671897  535383 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0730 01:17:57.671902  535383 command_runner.go:130] > [crio.api]
	I0730 01:17:57.671910  535383 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0730 01:17:57.671920  535383 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0730 01:17:57.671931  535383 command_runner.go:130] > # IP address on which the stream server will listen.
	I0730 01:17:57.671941  535383 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0730 01:17:57.671954  535383 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0730 01:17:57.671964  535383 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0730 01:17:57.671973  535383 command_runner.go:130] > # stream_port = "0"
	I0730 01:17:57.671982  535383 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0730 01:17:57.671990  535383 command_runner.go:130] > # stream_enable_tls = false
	I0730 01:17:57.671999  535383 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0730 01:17:57.672010  535383 command_runner.go:130] > # stream_idle_timeout = ""
	I0730 01:17:57.672025  535383 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0730 01:17:57.672038  535383 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0730 01:17:57.672047  535383 command_runner.go:130] > # minutes.
	I0730 01:17:57.672057  535383 command_runner.go:130] > # stream_tls_cert = ""
	I0730 01:17:57.672067  535383 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0730 01:17:57.672073  535383 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0730 01:17:57.672087  535383 command_runner.go:130] > # stream_tls_key = ""
	I0730 01:17:57.672101  535383 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0730 01:17:57.672113  535383 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0730 01:17:57.672132  535383 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0730 01:17:57.672140  535383 command_runner.go:130] > # stream_tls_ca = ""
	I0730 01:17:57.672151  535383 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0730 01:17:57.672158  535383 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0730 01:17:57.672168  535383 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0730 01:17:57.672179  535383 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0730 01:17:57.672192  535383 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0730 01:17:57.672203  535383 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0730 01:17:57.672212  535383 command_runner.go:130] > [crio.runtime]
	I0730 01:17:57.672224  535383 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0730 01:17:57.672235  535383 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0730 01:17:57.672241  535383 command_runner.go:130] > # "nofile=1024:2048"
	I0730 01:17:57.672249  535383 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0730 01:17:57.672258  535383 command_runner.go:130] > # default_ulimits = [
	I0730 01:17:57.672268  535383 command_runner.go:130] > # ]
	I0730 01:17:57.672280  535383 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0730 01:17:57.672289  535383 command_runner.go:130] > # no_pivot = false
	I0730 01:17:57.672301  535383 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0730 01:17:57.672313  535383 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0730 01:17:57.672321  535383 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0730 01:17:57.672329  535383 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0730 01:17:57.672337  535383 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0730 01:17:57.672350  535383 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0730 01:17:57.672360  535383 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0730 01:17:57.672371  535383 command_runner.go:130] > # Cgroup setting for conmon
	I0730 01:17:57.672384  535383 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0730 01:17:57.672392  535383 command_runner.go:130] > conmon_cgroup = "pod"
	I0730 01:17:57.672404  535383 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0730 01:17:57.672411  535383 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0730 01:17:57.672430  535383 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0730 01:17:57.672440  535383 command_runner.go:130] > conmon_env = [
	I0730 01:17:57.672449  535383 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0730 01:17:57.672457  535383 command_runner.go:130] > ]
	I0730 01:17:57.672469  535383 command_runner.go:130] > # Additional environment variables to set for all the
	I0730 01:17:57.672480  535383 command_runner.go:130] > # containers. These are overridden if set in the
	I0730 01:17:57.672490  535383 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0730 01:17:57.672496  535383 command_runner.go:130] > # default_env = [
	I0730 01:17:57.672500  535383 command_runner.go:130] > # ]
	I0730 01:17:57.672509  535383 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0730 01:17:57.672524  535383 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0730 01:17:57.672533  535383 command_runner.go:130] > # selinux = false
	I0730 01:17:57.672546  535383 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0730 01:17:57.672559  535383 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0730 01:17:57.672572  535383 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0730 01:17:57.672578  535383 command_runner.go:130] > # seccomp_profile = ""
	I0730 01:17:57.672584  535383 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0730 01:17:57.672596  535383 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0730 01:17:57.672609  535383 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0730 01:17:57.672620  535383 command_runner.go:130] > # which might increase security.
	I0730 01:17:57.672630  535383 command_runner.go:130] > # This option is currently deprecated,
	I0730 01:17:57.672641  535383 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0730 01:17:57.672652  535383 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0730 01:17:57.672662  535383 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0730 01:17:57.672672  535383 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0730 01:17:57.672688  535383 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0730 01:17:57.672700  535383 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0730 01:17:57.672721  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.672731  535383 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0730 01:17:57.672741  535383 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0730 01:17:57.672751  535383 command_runner.go:130] > # the cgroup blockio controller.
	I0730 01:17:57.672762  535383 command_runner.go:130] > # blockio_config_file = ""
	I0730 01:17:57.672774  535383 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0730 01:17:57.672780  535383 command_runner.go:130] > # blockio parameters.
	I0730 01:17:57.672786  535383 command_runner.go:130] > # blockio_reload = false
	I0730 01:17:57.672798  535383 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0730 01:17:57.672808  535383 command_runner.go:130] > # irqbalance daemon.
	I0730 01:17:57.672820  535383 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0730 01:17:57.672834  535383 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0730 01:17:57.672848  535383 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0730 01:17:57.672860  535383 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0730 01:17:57.672868  535383 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0730 01:17:57.672881  535383 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0730 01:17:57.672893  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.672902  535383 command_runner.go:130] > # rdt_config_file = ""
	I0730 01:17:57.672913  535383 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0730 01:17:57.672922  535383 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0730 01:17:57.672946  535383 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0730 01:17:57.672953  535383 command_runner.go:130] > # separate_pull_cgroup = ""
	I0730 01:17:57.672965  535383 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0730 01:17:57.672978  535383 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0730 01:17:57.672987  535383 command_runner.go:130] > # will be added.
	I0730 01:17:57.672997  535383 command_runner.go:130] > # default_capabilities = [
	I0730 01:17:57.673005  535383 command_runner.go:130] > # 	"CHOWN",
	I0730 01:17:57.673013  535383 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0730 01:17:57.673021  535383 command_runner.go:130] > # 	"FSETID",
	I0730 01:17:57.673029  535383 command_runner.go:130] > # 	"FOWNER",
	I0730 01:17:57.673032  535383 command_runner.go:130] > # 	"SETGID",
	I0730 01:17:57.673039  535383 command_runner.go:130] > # 	"SETUID",
	I0730 01:17:57.673045  535383 command_runner.go:130] > # 	"SETPCAP",
	I0730 01:17:57.673055  535383 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0730 01:17:57.673064  535383 command_runner.go:130] > # 	"KILL",
	I0730 01:17:57.673069  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673088  535383 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0730 01:17:57.673101  535383 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0730 01:17:57.673110  535383 command_runner.go:130] > # add_inheritable_capabilities = false
	I0730 01:17:57.673117  535383 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0730 01:17:57.673127  535383 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0730 01:17:57.673136  535383 command_runner.go:130] > default_sysctls = [
	I0730 01:17:57.673144  535383 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0730 01:17:57.673151  535383 command_runner.go:130] > ]
	I0730 01:17:57.673159  535383 command_runner.go:130] > # List of devices on the host that a
	I0730 01:17:57.673172  535383 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0730 01:17:57.673181  535383 command_runner.go:130] > # allowed_devices = [
	I0730 01:17:57.673191  535383 command_runner.go:130] > # 	"/dev/fuse",
	I0730 01:17:57.673198  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673203  535383 command_runner.go:130] > # List of additional devices. specified as
	I0730 01:17:57.673216  535383 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0730 01:17:57.673228  535383 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0730 01:17:57.673243  535383 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0730 01:17:57.673253  535383 command_runner.go:130] > # additional_devices = [
	I0730 01:17:57.673261  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673272  535383 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0730 01:17:57.673281  535383 command_runner.go:130] > # cdi_spec_dirs = [
	I0730 01:17:57.673287  535383 command_runner.go:130] > # 	"/etc/cdi",
	I0730 01:17:57.673292  535383 command_runner.go:130] > # 	"/var/run/cdi",
	I0730 01:17:57.673300  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673313  535383 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0730 01:17:57.673325  535383 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0730 01:17:57.673334  535383 command_runner.go:130] > # Defaults to false.
	I0730 01:17:57.673344  535383 command_runner.go:130] > # device_ownership_from_security_context = false
	I0730 01:17:57.673357  535383 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0730 01:17:57.673367  535383 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0730 01:17:57.673373  535383 command_runner.go:130] > # hooks_dir = [
	I0730 01:17:57.673380  535383 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0730 01:17:57.673388  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673402  535383 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0730 01:17:57.673414  535383 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0730 01:17:57.673425  535383 command_runner.go:130] > # its default mounts from the following two files:
	I0730 01:17:57.673434  535383 command_runner.go:130] > #
	I0730 01:17:57.673446  535383 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0730 01:17:57.673455  535383 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0730 01:17:57.673465  535383 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0730 01:17:57.673475  535383 command_runner.go:130] > #
	I0730 01:17:57.673486  535383 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0730 01:17:57.673498  535383 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0730 01:17:57.673510  535383 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0730 01:17:57.673520  535383 command_runner.go:130] > #      only add mounts it finds in this file.
	I0730 01:17:57.673528  535383 command_runner.go:130] > #
	I0730 01:17:57.673534  535383 command_runner.go:130] > # default_mounts_file = ""
	I0730 01:17:57.673541  535383 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0730 01:17:57.673549  535383 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0730 01:17:57.673559  535383 command_runner.go:130] > pids_limit = 1024
	I0730 01:17:57.673569  535383 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0730 01:17:57.673581  535383 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0730 01:17:57.673594  535383 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0730 01:17:57.673608  535383 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0730 01:17:57.673617  535383 command_runner.go:130] > # log_size_max = -1
	I0730 01:17:57.673626  535383 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0730 01:17:57.673638  535383 command_runner.go:130] > # log_to_journald = false
	I0730 01:17:57.673651  535383 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0730 01:17:57.673662  535383 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0730 01:17:57.673672  535383 command_runner.go:130] > # Path to directory for container attach sockets.
	I0730 01:17:57.673683  535383 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0730 01:17:57.673694  535383 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0730 01:17:57.673703  535383 command_runner.go:130] > # bind_mount_prefix = ""
	I0730 01:17:57.673710  535383 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0730 01:17:57.673715  535383 command_runner.go:130] > # read_only = false
	I0730 01:17:57.673727  535383 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0730 01:17:57.673740  535383 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0730 01:17:57.673749  535383 command_runner.go:130] > # live configuration reload.
	I0730 01:17:57.673757  535383 command_runner.go:130] > # log_level = "info"
	I0730 01:17:57.673768  535383 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0730 01:17:57.673779  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.673787  535383 command_runner.go:130] > # log_filter = ""
	I0730 01:17:57.673796  535383 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0730 01:17:57.673810  535383 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0730 01:17:57.673820  535383 command_runner.go:130] > # separated by comma.
	I0730 01:17:57.673831  535383 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0730 01:17:57.673842  535383 command_runner.go:130] > # uid_mappings = ""
	I0730 01:17:57.673854  535383 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0730 01:17:57.673866  535383 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0730 01:17:57.673875  535383 command_runner.go:130] > # separated by comma.
	I0730 01:17:57.673885  535383 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0730 01:17:57.673893  535383 command_runner.go:130] > # gid_mappings = ""
	I0730 01:17:57.673906  535383 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0730 01:17:57.673919  535383 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0730 01:17:57.673931  535383 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0730 01:17:57.673946  535383 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0730 01:17:57.673955  535383 command_runner.go:130] > # minimum_mappable_uid = -1
	I0730 01:17:57.673965  535383 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0730 01:17:57.673974  535383 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0730 01:17:57.673986  535383 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0730 01:17:57.674001  535383 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0730 01:17:57.674013  535383 command_runner.go:130] > # minimum_mappable_gid = -1
	I0730 01:17:57.674025  535383 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0730 01:17:57.674037  535383 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0730 01:17:57.674048  535383 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0730 01:17:57.674054  535383 command_runner.go:130] > # ctr_stop_timeout = 30
	I0730 01:17:57.674062  535383 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0730 01:17:57.674074  535383 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0730 01:17:57.674086  535383 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0730 01:17:57.674097  535383 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0730 01:17:57.674106  535383 command_runner.go:130] > drop_infra_ctr = false
	I0730 01:17:57.674116  535383 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0730 01:17:57.674127  535383 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0730 01:17:57.674138  535383 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0730 01:17:57.674146  535383 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0730 01:17:57.674157  535383 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0730 01:17:57.674170  535383 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0730 01:17:57.674181  535383 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0730 01:17:57.674192  535383 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0730 01:17:57.674202  535383 command_runner.go:130] > # shared_cpuset = ""
	I0730 01:17:57.674214  535383 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0730 01:17:57.674222  535383 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0730 01:17:57.674230  535383 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0730 01:17:57.674244  535383 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0730 01:17:57.674254  535383 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0730 01:17:57.674262  535383 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0730 01:17:57.674275  535383 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0730 01:17:57.674284  535383 command_runner.go:130] > # enable_criu_support = false
	I0730 01:17:57.674294  535383 command_runner.go:130] > # Enable/disable the generation of the container,
	I0730 01:17:57.674304  535383 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0730 01:17:57.674310  535383 command_runner.go:130] > # enable_pod_events = false
	I0730 01:17:57.674320  535383 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0730 01:17:57.674333  535383 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0730 01:17:57.674344  535383 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0730 01:17:57.674354  535383 command_runner.go:130] > # default_runtime = "runc"
	I0730 01:17:57.674365  535383 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0730 01:17:57.674379  535383 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0730 01:17:57.674392  535383 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0730 01:17:57.674405  535383 command_runner.go:130] > # creation as a file is not desired either.
	I0730 01:17:57.674421  535383 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0730 01:17:57.674432  535383 command_runner.go:130] > # the hostname is being managed dynamically.
	I0730 01:17:57.674441  535383 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0730 01:17:57.674450  535383 command_runner.go:130] > # ]
	I0730 01:17:57.674462  535383 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0730 01:17:57.674473  535383 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0730 01:17:57.674482  535383 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0730 01:17:57.674492  535383 command_runner.go:130] > # Each entry in the table should follow the format:
	I0730 01:17:57.674501  535383 command_runner.go:130] > #
	I0730 01:17:57.674512  535383 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0730 01:17:57.674522  535383 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0730 01:17:57.674559  535383 command_runner.go:130] > # runtime_type = "oci"
	I0730 01:17:57.674569  535383 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0730 01:17:57.674576  535383 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0730 01:17:57.674587  535383 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0730 01:17:57.674597  535383 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0730 01:17:57.674606  535383 command_runner.go:130] > # monitor_env = []
	I0730 01:17:57.674616  535383 command_runner.go:130] > # privileged_without_host_devices = false
	I0730 01:17:57.674626  535383 command_runner.go:130] > # allowed_annotations = []
	I0730 01:17:57.674639  535383 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0730 01:17:57.674645  535383 command_runner.go:130] > # Where:
	I0730 01:17:57.674652  535383 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0730 01:17:57.674665  535383 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0730 01:17:57.674678  535383 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0730 01:17:57.674690  535383 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0730 01:17:57.674698  535383 command_runner.go:130] > #   in $PATH.
	I0730 01:17:57.674708  535383 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0730 01:17:57.674719  535383 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0730 01:17:57.674728  535383 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0730 01:17:57.674734  535383 command_runner.go:130] > #   state.
	I0730 01:17:57.674744  535383 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0730 01:17:57.674757  535383 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0730 01:17:57.674771  535383 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0730 01:17:57.674783  535383 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0730 01:17:57.674795  535383 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0730 01:17:57.674808  535383 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0730 01:17:57.674818  535383 command_runner.go:130] > #   The currently recognized values are:
	I0730 01:17:57.674827  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0730 01:17:57.674842  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0730 01:17:57.674853  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0730 01:17:57.674866  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0730 01:17:57.674880  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0730 01:17:57.674893  535383 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0730 01:17:57.674902  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0730 01:17:57.674911  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0730 01:17:57.674924  535383 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0730 01:17:57.674937  535383 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0730 01:17:57.674947  535383 command_runner.go:130] > #   deprecated option "conmon".
	I0730 01:17:57.674960  535383 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0730 01:17:57.674971  535383 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0730 01:17:57.674981  535383 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0730 01:17:57.674989  535383 command_runner.go:130] > #   should be moved to the container's cgroup
	I0730 01:17:57.674999  535383 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0730 01:17:57.675010  535383 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0730 01:17:57.675024  535383 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0730 01:17:57.675036  535383 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0730 01:17:57.675043  535383 command_runner.go:130] > #
	I0730 01:17:57.675054  535383 command_runner.go:130] > # Using the seccomp notifier feature:
	I0730 01:17:57.675061  535383 command_runner.go:130] > #
	I0730 01:17:57.675067  535383 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0730 01:17:57.675078  535383 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0730 01:17:57.675089  535383 command_runner.go:130] > #
	I0730 01:17:57.675101  535383 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0730 01:17:57.675114  535383 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0730 01:17:57.675121  535383 command_runner.go:130] > #
	I0730 01:17:57.675131  535383 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0730 01:17:57.675139  535383 command_runner.go:130] > # feature.
	I0730 01:17:57.675145  535383 command_runner.go:130] > #
	I0730 01:17:57.675154  535383 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0730 01:17:57.675162  535383 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0730 01:17:57.675175  535383 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0730 01:17:57.675190  535383 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0730 01:17:57.675203  535383 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0730 01:17:57.675211  535383 command_runner.go:130] > #
	I0730 01:17:57.675223  535383 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0730 01:17:57.675234  535383 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0730 01:17:57.675239  535383 command_runner.go:130] > #
	I0730 01:17:57.675248  535383 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0730 01:17:57.675260  535383 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0730 01:17:57.675268  535383 command_runner.go:130] > #
	I0730 01:17:57.675277  535383 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0730 01:17:57.675290  535383 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0730 01:17:57.675299  535383 command_runner.go:130] > # limitation.
	I0730 01:17:57.675309  535383 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0730 01:17:57.675317  535383 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0730 01:17:57.675324  535383 command_runner.go:130] > runtime_type = "oci"
	I0730 01:17:57.675329  535383 command_runner.go:130] > runtime_root = "/run/runc"
	I0730 01:17:57.675336  535383 command_runner.go:130] > runtime_config_path = ""
	I0730 01:17:57.675347  535383 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0730 01:17:57.675353  535383 command_runner.go:130] > monitor_cgroup = "pod"
	I0730 01:17:57.675363  535383 command_runner.go:130] > monitor_exec_cgroup = ""
	I0730 01:17:57.675373  535383 command_runner.go:130] > monitor_env = [
	I0730 01:17:57.675385  535383 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0730 01:17:57.675392  535383 command_runner.go:130] > ]
	I0730 01:17:57.675400  535383 command_runner.go:130] > privileged_without_host_devices = false
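The runc entry above is where the seccomp notifier described in the preceding comments would be wired in. A minimal sketch, assuming a drop-in config is acceptable (the drop-in file name is illustrative, and a crio restart may be needed for runtime-table changes):

	# write a drop-in that allows the notifier annotation for the runc runtime shown above
	cat <<-'EOF' | sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio
	# a pod opting in would then set restartPolicy: Never and the annotation
	#   io.kubernetes.cri-o.seccompNotifierAction: stop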
	I0730 01:17:57.675408  535383 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0730 01:17:57.675418  535383 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0730 01:17:57.675432  535383 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0730 01:17:57.675446  535383 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0730 01:17:57.675462  535383 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0730 01:17:57.675473  535383 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0730 01:17:57.675488  535383 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0730 01:17:57.675499  535383 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0730 01:17:57.675509  535383 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0730 01:17:57.675520  535383 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0730 01:17:57.675526  535383 command_runner.go:130] > # Example:
	I0730 01:17:57.675533  535383 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0730 01:17:57.675541  535383 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0730 01:17:57.675552  535383 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0730 01:17:57.675559  535383 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0730 01:17:57.675565  535383 command_runner.go:130] > # cpuset = 0
	I0730 01:17:57.675571  535383 command_runner.go:130] > # cpushares = "0-1"
	I0730 01:17:57.675574  535383 command_runner.go:130] > # Where:
	I0730 01:17:57.675578  535383 command_runner.go:130] > # The workload name is workload-type.
	I0730 01:17:57.675586  535383 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0730 01:17:57.675595  535383 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0730 01:17:57.675605  535383 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0730 01:17:57.675617  535383 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0730 01:17:57.675625  535383 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
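Continuing the example in these comments, the pod side would carry the activation annotation plus an optional per-container override. A rough sketch, assuming the annotation forms shown above (the workload would also have to be defined in crio.conf; pod and container names are illustrative):

	cat <<-'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: burst
	  annotations:
	    io.crio/workload: ""                                 # activation annotation (key only, value ignored)
	    io.crio.workload-type/burst: '{"cpushares": "512"}'  # per-container override using the prefix above
	spec:
	  containers:
	  - name: burst
	    image: busybox
	    command: ["sleep", "3600"]
	EOF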
	I0730 01:17:57.675634  535383 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0730 01:17:57.675643  535383 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0730 01:17:57.675650  535383 command_runner.go:130] > # Default value is set to true
	I0730 01:17:57.675657  535383 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0730 01:17:57.675662  535383 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0730 01:17:57.675666  535383 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0730 01:17:57.675670  535383 command_runner.go:130] > # Default value is set to 'false'
	I0730 01:17:57.675674  535383 command_runner.go:130] > # disable_hostport_mapping = false
	I0730 01:17:57.675681  535383 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0730 01:17:57.675683  535383 command_runner.go:130] > #
	I0730 01:17:57.675691  535383 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0730 01:17:57.675701  535383 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0730 01:17:57.675714  535383 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0730 01:17:57.675726  535383 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0730 01:17:57.675738  535383 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0730 01:17:57.675746  535383 command_runner.go:130] > [crio.image]
	I0730 01:17:57.675758  535383 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0730 01:17:57.675765  535383 command_runner.go:130] > # default_transport = "docker://"
	I0730 01:17:57.675771  535383 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0730 01:17:57.675781  535383 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0730 01:17:57.675787  535383 command_runner.go:130] > # global_auth_file = ""
	I0730 01:17:57.675792  535383 command_runner.go:130] > # The image used to instantiate infra containers.
	I0730 01:17:57.675798  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.675803  535383 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0730 01:17:57.675811  535383 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0730 01:17:57.675818  535383 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0730 01:17:57.675825  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.675833  535383 command_runner.go:130] > # pause_image_auth_file = ""
	I0730 01:17:57.675845  535383 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0730 01:17:57.675858  535383 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0730 01:17:57.675871  535383 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0730 01:17:57.675883  535383 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0730 01:17:57.675892  535383 command_runner.go:130] > # pause_command = "/pause"
	I0730 01:17:57.675902  535383 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0730 01:17:57.675911  535383 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0730 01:17:57.675919  535383 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0730 01:17:57.675927  535383 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0730 01:17:57.675934  535383 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0730 01:17:57.675940  535383 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0730 01:17:57.675946  535383 command_runner.go:130] > # pinned_images = [
	I0730 01:17:57.675949  535383 command_runner.go:130] > # ]
	I0730 01:17:57.675957  535383 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0730 01:17:57.675964  535383 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0730 01:17:57.675972  535383 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0730 01:17:57.675981  535383 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0730 01:17:57.675988  535383 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0730 01:17:57.675992  535383 command_runner.go:130] > # signature_policy = ""
	I0730 01:17:57.676000  535383 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0730 01:17:57.676006  535383 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0730 01:17:57.676014  535383 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0730 01:17:57.676022  535383 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0730 01:17:57.676029  535383 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0730 01:17:57.676034  535383 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0730 01:17:57.676041  535383 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0730 01:17:57.676049  535383 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0730 01:17:57.676053  535383 command_runner.go:130] > # changing them here.
	I0730 01:17:57.676062  535383 command_runner.go:130] > # insecure_registries = [
	I0730 01:17:57.676067  535383 command_runner.go:130] > # ]
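For the registry note above, a minimal sketch of the recommended route, i.e. adding the entry to containers-registries.conf rather than to the crio.image table (the registry name is illustrative):

	sudo tee -a /etc/containers/registries.conf <<-'EOF'
	[[registry]]
	location = "registry.local:5000"
	insecure = true
	EOF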
	I0730 01:17:57.676078  535383 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0730 01:17:57.676089  535383 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0730 01:17:57.676092  535383 command_runner.go:130] > # image_volumes = "mkdir"
	I0730 01:17:57.676099  535383 command_runner.go:130] > # Temporary directory to use for storing big files
	I0730 01:17:57.676103  535383 command_runner.go:130] > # big_files_temporary_dir = ""
	I0730 01:17:57.676112  535383 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0730 01:17:57.676118  535383 command_runner.go:130] > # CNI plugins.
	I0730 01:17:57.676121  535383 command_runner.go:130] > [crio.network]
	I0730 01:17:57.676127  535383 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0730 01:17:57.676134  535383 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0730 01:17:57.676138  535383 command_runner.go:130] > # cni_default_network = ""
	I0730 01:17:57.676143  535383 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0730 01:17:57.676149  535383 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0730 01:17:57.676154  535383 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0730 01:17:57.676158  535383 command_runner.go:130] > # plugin_dirs = [
	I0730 01:17:57.676163  535383 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0730 01:17:57.676166  535383 command_runner.go:130] > # ]
	I0730 01:17:57.676172  535383 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0730 01:17:57.676177  535383 command_runner.go:130] > [crio.metrics]
	I0730 01:17:57.676182  535383 command_runner.go:130] > # Globally enable or disable metrics support.
	I0730 01:17:57.676188  535383 command_runner.go:130] > enable_metrics = true
	I0730 01:17:57.676192  535383 command_runner.go:130] > # Specify enabled metrics collectors.
	I0730 01:17:57.676200  535383 command_runner.go:130] > # Per default all metrics are enabled.
	I0730 01:17:57.676207  535383 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0730 01:17:57.676215  535383 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0730 01:17:57.676221  535383 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0730 01:17:57.676225  535383 command_runner.go:130] > # metrics_collectors = [
	I0730 01:17:57.676230  535383 command_runner.go:130] > # 	"operations",
	I0730 01:17:57.676235  535383 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0730 01:17:57.676241  535383 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0730 01:17:57.676245  535383 command_runner.go:130] > # 	"operations_errors",
	I0730 01:17:57.676251  535383 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0730 01:17:57.676255  535383 command_runner.go:130] > # 	"image_pulls_by_name",
	I0730 01:17:57.676261  535383 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0730 01:17:57.676265  535383 command_runner.go:130] > # 	"image_pulls_failures",
	I0730 01:17:57.676272  535383 command_runner.go:130] > # 	"image_pulls_successes",
	I0730 01:17:57.676276  535383 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0730 01:17:57.676282  535383 command_runner.go:130] > # 	"image_layer_reuse",
	I0730 01:17:57.676287  535383 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0730 01:17:57.676293  535383 command_runner.go:130] > # 	"containers_oom_total",
	I0730 01:17:57.676298  535383 command_runner.go:130] > # 	"containers_oom",
	I0730 01:17:57.676304  535383 command_runner.go:130] > # 	"processes_defunct",
	I0730 01:17:57.676308  535383 command_runner.go:130] > # 	"operations_total",
	I0730 01:17:57.676312  535383 command_runner.go:130] > # 	"operations_latency_seconds",
	I0730 01:17:57.676318  535383 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0730 01:17:57.676323  535383 command_runner.go:130] > # 	"operations_errors_total",
	I0730 01:17:57.676329  535383 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0730 01:17:57.676335  535383 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0730 01:17:57.676341  535383 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0730 01:17:57.676346  535383 command_runner.go:130] > # 	"image_pulls_success_total",
	I0730 01:17:57.676354  535383 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0730 01:17:57.676361  535383 command_runner.go:130] > # 	"containers_oom_count_total",
	I0730 01:17:57.676365  535383 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0730 01:17:57.676373  535383 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0730 01:17:57.676376  535383 command_runner.go:130] > # ]
	I0730 01:17:57.676382  535383 command_runner.go:130] > # The port on which the metrics server will listen.
	I0730 01:17:57.676386  535383 command_runner.go:130] > # metrics_port = 9090
	I0730 01:17:57.676393  535383 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0730 01:17:57.676398  535383 command_runner.go:130] > # metrics_socket = ""
	I0730 01:17:57.676405  535383 command_runner.go:130] > # The certificate for the secure metrics server.
	I0730 01:17:57.676413  535383 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0730 01:17:57.676421  535383 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0730 01:17:57.676428  535383 command_runner.go:130] > # certificate on any modification event.
	I0730 01:17:57.676432  535383 command_runner.go:130] > # metrics_cert = ""
	I0730 01:17:57.676438  535383 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0730 01:17:57.676445  535383 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0730 01:17:57.676454  535383 command_runner.go:130] > # metrics_key = ""
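With enable_metrics set to true above, the collectors listed in this section are served on the default metrics_port (9090) inside the node. A quick spot-check from the host, assuming the endpoint is reachable on localhost in the VM (profile name taken from this log):

	minikube -p multinode-543365 ssh "curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio_"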
	I0730 01:17:57.676465  535383 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0730 01:17:57.676472  535383 command_runner.go:130] > [crio.tracing]
	I0730 01:17:57.676477  535383 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0730 01:17:57.676483  535383 command_runner.go:130] > # enable_tracing = false
	I0730 01:17:57.676488  535383 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0730 01:17:57.676495  535383 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0730 01:17:57.676504  535383 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0730 01:17:57.676511  535383 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0730 01:17:57.676515  535383 command_runner.go:130] > # CRI-O NRI configuration.
	I0730 01:17:57.676521  535383 command_runner.go:130] > [crio.nri]
	I0730 01:17:57.676525  535383 command_runner.go:130] > # Globally enable or disable NRI.
	I0730 01:17:57.676531  535383 command_runner.go:130] > # enable_nri = false
	I0730 01:17:57.676535  535383 command_runner.go:130] > # NRI socket to listen on.
	I0730 01:17:57.676542  535383 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0730 01:17:57.676546  535383 command_runner.go:130] > # NRI plugin directory to use.
	I0730 01:17:57.676551  535383 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0730 01:17:57.676558  535383 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0730 01:17:57.676563  535383 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0730 01:17:57.676571  535383 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0730 01:17:57.676577  535383 command_runner.go:130] > # nri_disable_connections = false
	I0730 01:17:57.676582  535383 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0730 01:17:57.676589  535383 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0730 01:17:57.676593  535383 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0730 01:17:57.676600  535383 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0730 01:17:57.676606  535383 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0730 01:17:57.676611  535383 command_runner.go:130] > [crio.stats]
	I0730 01:17:57.676620  535383 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0730 01:17:57.676628  535383 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0730 01:17:57.676632  535383 command_runner.go:130] > # stats_collection_period = 0
	I0730 01:17:57.676766  535383 cni.go:84] Creating CNI manager for ""
	I0730 01:17:57.676777  535383 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 01:17:57.676785  535383 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 01:17:57.676808  535383 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-543365 NodeName:multinode-543365 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 01:17:57.676950  535383 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-543365"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 01:17:57.677029  535383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 01:17:57.686783  535383 command_runner.go:130] > kubeadm
	I0730 01:17:57.686802  535383 command_runner.go:130] > kubectl
	I0730 01:17:57.686808  535383 command_runner.go:130] > kubelet
	I0730 01:17:57.686824  535383 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 01:17:57.686885  535383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 01:17:57.696189  535383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0730 01:17:57.712541  535383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 01:17:57.728618  535383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
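The kubeadm.yaml.new just written corresponds to the config rendered above. A hedged way to sanity-check a config of this shape on the node is kubeadm's own "config validate" subcommand (present in recent kubeadm releases), using the binary path found earlier in this log:

	minikube -p multinode-543365 ssh "sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"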
	I0730 01:17:57.744676  535383 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I0730 01:17:57.748140  535383 command_runner.go:130] > 192.168.39.235	control-plane.minikube.internal
	I0730 01:17:57.748266  535383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:17:57.877568  535383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 01:17:57.892734  535383 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365 for IP: 192.168.39.235
	I0730 01:17:57.892762  535383 certs.go:194] generating shared ca certs ...
	I0730 01:17:57.892783  535383 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:17:57.892972  535383 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 01:17:57.893017  535383 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 01:17:57.893028  535383 certs.go:256] generating profile certs ...
	I0730 01:17:57.893105  535383 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/client.key
	I0730 01:17:57.893157  535383 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.key.a9fe4432
	I0730 01:17:57.893191  535383 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.key
	I0730 01:17:57.893202  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 01:17:57.893214  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 01:17:57.893223  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 01:17:57.893236  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 01:17:57.893248  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 01:17:57.893263  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 01:17:57.893275  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 01:17:57.893288  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 01:17:57.893357  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 01:17:57.893385  535383 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 01:17:57.893395  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 01:17:57.893420  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 01:17:57.893444  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 01:17:57.893465  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 01:17:57.893503  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:17:57.893530  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 01:17:57.893543  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 01:17:57.893556  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:57.894194  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 01:17:57.916198  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 01:17:57.938321  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 01:17:57.960108  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 01:17:57.981855  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0730 01:17:58.004475  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 01:17:58.026459  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 01:17:58.048180  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0730 01:17:58.070122  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 01:17:58.091655  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 01:17:58.113952  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 01:17:58.136529  535383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 01:17:58.152945  535383 ssh_runner.go:195] Run: openssl version
	I0730 01:17:58.158550  535383 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0730 01:17:58.158625  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 01:17:58.168627  535383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 01:17:58.172630  535383 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 01:17:58.172758  535383 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 01:17:58.172803  535383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 01:17:58.177789  535383 command_runner.go:130] > 3ec20f2e
	I0730 01:17:58.178012  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 01:17:58.186612  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 01:17:58.196595  535383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:58.200620  535383 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:58.200668  535383 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:58.200725  535383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:58.205720  535383 command_runner.go:130] > b5213941
	I0730 01:17:58.205805  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 01:17:58.214199  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 01:17:58.224332  535383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 01:17:58.246721  535383 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 01:17:58.246913  535383 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 01:17:58.246982  535383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 01:17:58.253796  535383 command_runner.go:130] > 51391683
	I0730 01:17:58.253895  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
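The hash-and-symlink steps above are what let OpenSSL look each CA up by subject hash under /etc/ssl/certs. A small hedged check that the links resolve, run inside the node:

	minikube -p multinode-543365 ssh "openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem"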
	I0730 01:17:58.287608  535383 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 01:17:58.298821  535383 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 01:17:58.298853  535383 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0730 01:17:58.298859  535383 command_runner.go:130] > Device: 253,1	Inode: 6292011     Links: 1
	I0730 01:17:58.298866  535383 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0730 01:17:58.298872  535383 command_runner.go:130] > Access: 2024-07-30 01:11:07.593805320 +0000
	I0730 01:17:58.298881  535383 command_runner.go:130] > Modify: 2024-07-30 01:11:07.593805320 +0000
	I0730 01:17:58.298886  535383 command_runner.go:130] > Change: 2024-07-30 01:11:07.593805320 +0000
	I0730 01:17:58.298891  535383 command_runner.go:130] >  Birth: 2024-07-30 01:11:07.593805320 +0000
	I0730 01:17:58.298957  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 01:17:58.306173  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.306333  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 01:17:58.314432  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.314517  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 01:17:58.325378  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.327747  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 01:17:58.334440  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.334522  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 01:17:58.340753  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.340938  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0730 01:17:58.346103  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.346192  535383 kubeadm.go:392] StartCluster: {Name:multinode-543365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:17:58.346406  535383 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 01:17:58.346504  535383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 01:17:58.404129  535383 command_runner.go:130] > 8612bcafcf544a74843e25073d4a30f22fcd4568b61f9e31feffa3f1ab4a2e10
	I0730 01:17:58.404165  535383 command_runner.go:130] > 6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea
	I0730 01:17:58.404176  535383 command_runner.go:130] > 14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b
	I0730 01:17:58.404185  535383 command_runner.go:130] > e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068
	I0730 01:17:58.404194  535383 command_runner.go:130] > 0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5
	I0730 01:17:58.404202  535383 command_runner.go:130] > 0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce
	I0730 01:17:58.404210  535383 command_runner.go:130] > 1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3
	I0730 01:17:58.404221  535383 command_runner.go:130] > c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08
	I0730 01:17:58.404253  535383 cri.go:89] found id: "8612bcafcf544a74843e25073d4a30f22fcd4568b61f9e31feffa3f1ab4a2e10"
	I0730 01:17:58.404263  535383 cri.go:89] found id: "6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea"
	I0730 01:17:58.404270  535383 cri.go:89] found id: "14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b"
	I0730 01:17:58.404275  535383 cri.go:89] found id: "e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068"
	I0730 01:17:58.404280  535383 cri.go:89] found id: "0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5"
	I0730 01:17:58.404286  535383 cri.go:89] found id: "0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce"
	I0730 01:17:58.404290  535383 cri.go:89] found id: "1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3"
	I0730 01:17:58.404295  535383 cri.go:89] found id: "c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08"
	I0730 01:17:58.404310  535383 cri.go:89] found id: ""
	I0730 01:17:58.404372  535383 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.274358609Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d09c770c-14d9-4751-b992-c96000f8a420 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.275355298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d465dd19-70a4-49aa-b67e-b2ff63b8a4c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.275764259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722302386275741875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d465dd19-70a4-49aa-b67e-b2ff63b8a4c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.276433497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e23ffb9-b910-401d-9098-d5bb23270a79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.276500822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e23ffb9-b910-401d-9098-d5bb23270a79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.276851175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b27c24cd05b05107a462e1ccb3897ab9ab3ae78491b94d256b8688e6eab8fb38,PodSandboxId:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722302318114352498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722302291553937440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c,PodSandboxId:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722302284747678925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739
cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902,PodSandboxId:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722302284689500794,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d,PodSandboxId:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722302284583588700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:map[string]string{io.kuber
netes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5998f46e07386c75d388adfee8a8c25bd20d88325a788af3ba21e7f3003b872f,PodSandboxId:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbbd5b769ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302284716085058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9ca9212e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb,PodSandboxId:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722302284545788472,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42,PodSandboxId:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722302284517298362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea,PodSandboxId:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722302284427188921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722302278426847371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073faf6c75a5cad115463e7508fafe76c793eed97435d89a30f6e7bfcbb529b8,PodSandboxId:f9e4f60f2d5f8924bdf8dcab6dcf380ff13ae865e38c099ea4a3062629c23e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722301958743444567,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea,PodSandboxId:647b456b957b35f6c66b985fbb1d700f665da5c31c9bcfd3b10b29490b675aeb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301906730248851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca9212e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b,PodSandboxId:d3fcad9a0e600f054b923cd34c3df17224211a0e0b044fe036c15c24b0163d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722301895185368837,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068,PodSandboxId:2ffcd2964fc06d698d7a51df6e09d1488b09dee7b676cb58322413eb10f80a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722301891672430680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5,PodSandboxId:bb0be5746b3e429e1efa5f7d85900ef2dc2ab841f0c81276101016204ee306c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722301871731927891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76
,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce,PodSandboxId:8a331ee826a466ca2a92a71f17cca64aabd048d3d7c0897beaa5642ca196984a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722301871708454594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3,PodSandboxId:b0b923422344e02661c9849f9a733af633cf5b30bef90a3a883fd85743a1be4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722301871704339305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08,PodSandboxId:449335c90087a8fa0cea9dfdfa5a464478c1f1fdf8342bd1c33c3303078eca7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722301871625577137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map
[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e23ffb9-b910-401d-9098-d5bb23270a79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.315548609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cbfd1d22-2cf7-475a-b740-33f4e7b825e4 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.315625103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cbfd1d22-2cf7-475a-b740-33f4e7b825e4 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.316422516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66a5ba8b-ddfd-4693-9127-9892e8bbace4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.317170174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722302386317140886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66a5ba8b-ddfd-4693-9127-9892e8bbace4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.317671633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97eb5605-90e8-4b53-b028-59410a743ccc name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.317723553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97eb5605-90e8-4b53-b028-59410a743ccc name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.318136720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b27c24cd05b05107a462e1ccb3897ab9ab3ae78491b94d256b8688e6eab8fb38,PodSandboxId:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722302318114352498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722302291553937440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c,PodSandboxId:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722302284747678925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739
cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902,PodSandboxId:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722302284689500794,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d,PodSandboxId:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722302284583588700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:map[string]string{io.kuber
netes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5998f46e07386c75d388adfee8a8c25bd20d88325a788af3ba21e7f3003b872f,PodSandboxId:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbbd5b769ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302284716085058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9ca9212e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb,PodSandboxId:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722302284545788472,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42,PodSandboxId:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722302284517298362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea,PodSandboxId:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722302284427188921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722302278426847371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073faf6c75a5cad115463e7508fafe76c793eed97435d89a30f6e7bfcbb529b8,PodSandboxId:f9e4f60f2d5f8924bdf8dcab6dcf380ff13ae865e38c099ea4a3062629c23e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722301958743444567,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea,PodSandboxId:647b456b957b35f6c66b985fbb1d700f665da5c31c9bcfd3b10b29490b675aeb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301906730248851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca9212e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b,PodSandboxId:d3fcad9a0e600f054b923cd34c3df17224211a0e0b044fe036c15c24b0163d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722301895185368837,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068,PodSandboxId:2ffcd2964fc06d698d7a51df6e09d1488b09dee7b676cb58322413eb10f80a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722301891672430680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5,PodSandboxId:bb0be5746b3e429e1efa5f7d85900ef2dc2ab841f0c81276101016204ee306c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722301871731927891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76
,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce,PodSandboxId:8a331ee826a466ca2a92a71f17cca64aabd048d3d7c0897beaa5642ca196984a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722301871708454594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3,PodSandboxId:b0b923422344e02661c9849f9a733af633cf5b30bef90a3a883fd85743a1be4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722301871704339305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08,PodSandboxId:449335c90087a8fa0cea9dfdfa5a464478c1f1fdf8342bd1c33c3303078eca7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722301871625577137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map
[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97eb5605-90e8-4b53-b028-59410a743ccc name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.348440319Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c153186-e93a-4c57-94d7-9e983ce0c811 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.348730310Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-t9w48,Uid:6e9e683c-04d9-456c-a7d5-206e09d00256,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302317995199154,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:18:11.234261553Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&PodSandboxMetadata{Name:kube-proxy-kknjc,Uid:e340baed-b0bc-417f-a3c8-2739cfdc97c4,Namespace:kube-system,Attempt:1,},State:S
ANDBOX_READY,CreatedAt:1722302284246698981,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:11:30.375318803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-543365,Uid:3e7bbbb2b9fff26b5f93da0f692e3a38,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284243297459,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,tier: control-plane,},Annotations:map[string]str
ing{kubernetes.io/config.hash: 3e7bbbb2b9fff26b5f93da0f692e3a38,kubernetes.io/config.seen: 2024-07-30T01:11:16.791714966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&PodSandboxMetadata{Name:etcd-multinode-543365,Uid:cca80933eb38ca6746fd9fbe9232fa76,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284242224105,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.235:2379,kubernetes.io/config.hash: cca80933eb38ca6746fd9fbe9232fa76,kubernetes.io/config.seen: 2024-07-30T01:11:16.791716103Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbb
d5b769ec,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a772f2dd-f657-4eb2-9c29-f612c46c1e6e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284242182405,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\
":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-30T01:11:46.285951132Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-543365,Uid:44a2facdcdb5be9b2ea24038d2e5e2c1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284239551984,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.235:8443,kubernetes.io/config.hash: 44a2facdcdb5be9b2ea24038d2e5e2c1,kubernetes.io/conf
ig.seen: 2024-07-30T01:11:16.791708695Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-543365,Uid:0c8a5915b3d273b95feb8931e355b638,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284230041644,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0c8a5915b3d273b95feb8931e355b638,kubernetes.io/config.seen: 2024-07-30T01:11:16.791713691Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&PodSandboxMetadata{Name:kindnet-nhqxm,Uid:60c95f91-4cb1-4f07-a34c-bed380318903,Names
pace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284225795118,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:11:30.335065133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4lxcw,Uid:1498a653-557e-46df-84a2-a58156bebfe7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302278256706302,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,k8s-app: kube-dns,pod-temp
late-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:11:46.290124206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0c153186-e93a-4c57-94d7-9e983ce0c811 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.350006643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb2f01d5-b495-4bca-aaff-22101b53abba name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.350081609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb2f01d5-b495-4bca-aaff-22101b53abba name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.350288734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b27c24cd05b05107a462e1ccb3897ab9ab3ae78491b94d256b8688e6eab8fb38,PodSandboxId:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722302318114352498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722302291553937440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c,PodSandboxId:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722302284747678925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739
cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902,PodSandboxId:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722302284689500794,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d,PodSandboxId:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722302284583588700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:map[string]string{io.kuber
netes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5998f46e07386c75d388adfee8a8c25bd20d88325a788af3ba21e7f3003b872f,PodSandboxId:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbbd5b769ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302284716085058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9ca9212e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb,PodSandboxId:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722302284545788472,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42,PodSandboxId:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722302284517298362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea,PodSandboxId:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722302284427188921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb2f01d5-b495-4bca-aaff-22101b53abba name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.372528670Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a2b8df4-39fb-4eeb-8374-5608e3f57d91 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.372600638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a2b8df4-39fb-4eeb-8374-5608e3f57d91 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.373588270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87d56b8b-959d-4fda-a878-c4c09b196343 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.374116783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722302386374080076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87d56b8b-959d-4fda-a878-c4c09b196343 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.374619274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1199537b-2c2c-4864-990d-c578cd018f81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.374692915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1199537b-2c2c-4864-990d-c578cd018f81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:19:46 multinode-543365 crio[2853]: time="2024-07-30 01:19:46.375068283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b27c24cd05b05107a462e1ccb3897ab9ab3ae78491b94d256b8688e6eab8fb38,PodSandboxId:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722302318114352498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722302291553937440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c,PodSandboxId:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722302284747678925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739
cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902,PodSandboxId:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722302284689500794,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d,PodSandboxId:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722302284583588700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:map[string]string{io.kuber
netes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5998f46e07386c75d388adfee8a8c25bd20d88325a788af3ba21e7f3003b872f,PodSandboxId:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbbd5b769ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302284716085058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9ca9212e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb,PodSandboxId:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722302284545788472,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42,PodSandboxId:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722302284517298362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea,PodSandboxId:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722302284427188921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722302278426847371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073faf6c75a5cad115463e7508fafe76c793eed97435d89a30f6e7bfcbb529b8,PodSandboxId:f9e4f60f2d5f8924bdf8dcab6dcf380ff13ae865e38c099ea4a3062629c23e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722301958743444567,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea,PodSandboxId:647b456b957b35f6c66b985fbb1d700f665da5c31c9bcfd3b10b29490b675aeb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301906730248851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca9212e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b,PodSandboxId:d3fcad9a0e600f054b923cd34c3df17224211a0e0b044fe036c15c24b0163d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722301895185368837,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068,PodSandboxId:2ffcd2964fc06d698d7a51df6e09d1488b09dee7b676cb58322413eb10f80a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722301891672430680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5,PodSandboxId:bb0be5746b3e429e1efa5f7d85900ef2dc2ab841f0c81276101016204ee306c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722301871731927891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76
,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce,PodSandboxId:8a331ee826a466ca2a92a71f17cca64aabd048d3d7c0897beaa5642ca196984a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722301871708454594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3,PodSandboxId:b0b923422344e02661c9849f9a733af633cf5b30bef90a3a883fd85743a1be4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722301871704339305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08,PodSandboxId:449335c90087a8fa0cea9dfdfa5a464478c1f1fdf8342bd1c33c3303078eca7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722301871625577137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map
[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1199537b-2c2c-4864-990d-c578cd018f81 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b27c24cd05b05       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   8b8f426addeef       busybox-fc5497c4f-t9w48
	552636b457ec5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   a273cf71b9949       coredns-7db6d8ff4d-4lxcw
	7782ced092804       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   3e9cd17ba880b       kube-proxy-kknjc
	5998f46e07386       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   6974746fb4d98       storage-provisioner
	2d47482944552       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   a9843a16da32f       kindnet-nhqxm
	2308013e18c51       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   3620e80db1ded       kube-scheduler-multinode-543365
	ee4c048a4833a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   50e664b75bda8       etcd-multinode-543365
	06b51cae1ca69       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   0d53fd9b06f69       kube-controller-manager-multinode-543365
	6359129ac77b1       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   219ed7a1f7971       kube-apiserver-multinode-543365
	302e6de0ed6c4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   a273cf71b9949       coredns-7db6d8ff4d-4lxcw
	073faf6c75a5c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   f9e4f60f2d5f8       busybox-fc5497c4f-t9w48
	6aa7eb02bfb7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   647b456b957b3       storage-provisioner
	14e9c0b67555e       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   d3fcad9a0e600       kindnet-nhqxm
	e5c812257815d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   2ffcd2964fc06       kube-proxy-kknjc
	0c315fbcec823       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   bb0be5746b3e4       etcd-multinode-543365
	0f8bdfa3ecd41       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   8a331ee826a46       kube-scheduler-multinode-543365
	1a7e2b10c6248       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   b0b923422344e       kube-controller-manager-multinode-543365
	c06510d11072b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   449335c90087a       kube-apiserver-multinode-543365
	
	
	==> coredns [302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59384 - 59809 "HINFO IN 124160274080010694.8473823160395440456. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009936685s
	
	
	==> coredns [552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45898 - 49885 "HINFO IN 387166435644859509.2428656921758690141. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009760093s
	
	
	==> describe nodes <==
	Name:               multinode-543365
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-543365
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=multinode-543365
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T01_11_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 01:11:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-543365
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 01:19:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 01:18:10 +0000   Tue, 30 Jul 2024 01:11:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 01:18:10 +0000   Tue, 30 Jul 2024 01:11:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 01:18:10 +0000   Tue, 30 Jul 2024 01:11:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 01:18:10 +0000   Tue, 30 Jul 2024 01:11:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    multinode-543365
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8beae51f009544c29d620bade862ba87
	  System UUID:                8beae51f-0095-44c2-9d62-0bade862ba87
	  Boot ID:                    e722d9d3-6cdf-4e6f-87f3-5bc6618d6fde
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t9w48                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 coredns-7db6d8ff4d-4lxcw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m16s
	  kube-system                 etcd-multinode-543365                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m30s
	  kube-system                 kindnet-nhqxm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m16s
	  kube-system                 kube-apiserver-multinode-543365             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-multinode-543365    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-kknjc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-scheduler-multinode-543365             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m14s  kube-proxy       
	  Normal  Starting                 98s    kube-proxy       
	  Normal  NodeAllocatableEnforced  8m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m30s  kubelet          Node multinode-543365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s  kubelet          Node multinode-543365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s  kubelet          Node multinode-543365 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m30s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m17s  node-controller  Node multinode-543365 event: Registered Node multinode-543365 in Controller
	  Normal  NodeReady                8m     kubelet          Node multinode-543365 status is now: NodeReady
	  Normal  Starting                 96s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s    kubelet          Node multinode-543365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s    kubelet          Node multinode-543365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s    kubelet          Node multinode-543365 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s    node-controller  Node multinode-543365 event: Registered Node multinode-543365 in Controller
	
	
	Name:               multinode-543365-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-543365-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=multinode-543365
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T01_18_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 01:18:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-543365-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 01:19:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 01:19:18 +0000   Tue, 30 Jul 2024 01:18:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 01:19:18 +0000   Tue, 30 Jul 2024 01:18:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 01:19:18 +0000   Tue, 30 Jul 2024 01:18:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 01:19:18 +0000   Tue, 30 Jul 2024 01:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    multinode-543365-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 00aa7d560b51467a8ea0a1e18e9bb185
	  System UUID:                00aa7d56-0b51-467a-8ea0-a1e18e9bb185
	  Boot ID:                    11986d1f-e015-4837-a51d-871e1666745b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qq57b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kindnet-kbsgw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m32s
	  kube-system                 kube-proxy-xpm28           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m28s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m32s (x2 over 7m32s)  kubelet     Node multinode-543365-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s (x2 over 7m32s)  kubelet     Node multinode-543365-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m32s (x2 over 7m32s)  kubelet     Node multinode-543365-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m13s                  kubelet     Node multinode-543365-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)      kubelet     Node multinode-543365-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)      kubelet     Node multinode-543365-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)      kubelet     Node multinode-543365-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                40s                    kubelet     Node multinode-543365-m02 status is now: NodeReady
	
	
	Name:               multinode-543365-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-543365-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=multinode-543365
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T01_19_25_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 01:19:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-543365-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 01:19:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 01:19:43 +0000   Tue, 30 Jul 2024 01:19:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 01:19:43 +0000   Tue, 30 Jul 2024 01:19:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 01:19:43 +0000   Tue, 30 Jul 2024 01:19:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 01:19:43 +0000   Tue, 30 Jul 2024 01:19:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    multinode-543365-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fb15361c1e443619b4c17e7044c1eeb
	  System UUID:                8fb15361-c1e4-4361-9b4c-17e7044c1eeb
	  Boot ID:                    0825279d-c188-4f52-bd09-45c59ba038b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-srwdc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-2qw48    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m35s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m45s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m41s (x2 over 6m41s)  kubelet     Node multinode-543365-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x2 over 6m41s)  kubelet     Node multinode-543365-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x2 over 6m41s)  kubelet     Node multinode-543365-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m20s                  kubelet     Node multinode-543365-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet     Node multinode-543365-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m50s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet     Node multinode-543365-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet     Node multinode-543365-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m31s                  kubelet     Node multinode-543365-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet     Node multinode-543365-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet     Node multinode-543365-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet     Node multinode-543365-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-543365-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.045817] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.157353] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.147732] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.280147] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +3.954713] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.626878] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.056235] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.974926] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.080176] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.619588] systemd-fstab-generator[1458]: Ignoring "noauto" option for root device
	[  +0.127447] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.506502] kauditd_printk_skb: 56 callbacks suppressed
	[Jul30 01:12] kauditd_printk_skb: 14 callbacks suppressed
	[Jul30 01:17] systemd-fstab-generator[2771]: Ignoring "noauto" option for root device
	[  +0.164369] systemd-fstab-generator[2783]: Ignoring "noauto" option for root device
	[  +0.162604] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.130906] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +0.260143] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +3.692329] systemd-fstab-generator[2936]: Ignoring "noauto" option for root device
	[Jul30 01:18] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.674066] systemd-fstab-generator[3798]: Ignoring "noauto" option for root device
	[  +0.090969] kauditd_printk_skb: 62 callbacks suppressed
	[ +11.259037] kauditd_printk_skb: 19 callbacks suppressed
	[  +2.212095] systemd-fstab-generator[3974]: Ignoring "noauto" option for root device
	[ +14.481074] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5] <==
	{"level":"info","ts":"2024-07-30T01:12:14.370556Z","caller":"traceutil/trace.go:171","msg":"trace[1437787926] range","detail":"{range_begin:/registry/minions/multinode-543365-m02; range_end:; response_count:0; response_revision:443; }","duration":"233.93309ms","start":"2024-07-30T01:12:14.13657Z","end":"2024-07-30T01:12:14.370503Z","steps":["trace[1437787926] 'agreement among raft nodes before linearized reading'  (duration: 232.162451ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:12:14.369606Z","caller":"traceutil/trace.go:171","msg":"trace[264826785] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"179.928507ms","start":"2024-07-30T01:12:14.18967Z","end":"2024-07-30T01:12:14.369598Z","steps":["trace[264826785] 'process raft request'  (duration: 179.696656ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T01:12:22.280131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.277606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-kbsgw\" ","response":"range_response_count:1 size:4929"}
	{"level":"info","ts":"2024-07-30T01:12:22.28025Z","caller":"traceutil/trace.go:171","msg":"trace[1052457535] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-kbsgw; range_end:; response_count:1; response_revision:487; }","duration":"106.409544ms","start":"2024-07-30T01:12:22.173808Z","end":"2024-07-30T01:12:22.280218Z","steps":["trace[1052457535] 'range keys from in-memory index tree'  (duration: 106.17213ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:12:22.446022Z","caller":"traceutil/trace.go:171","msg":"trace[1404368536] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"103.291558ms","start":"2024-07-30T01:12:22.342715Z","end":"2024-07-30T01:12:22.446007Z","steps":["trace[1404368536] 'read index received'  (duration: 103.062394ms)","trace[1404368536] 'applied index is now lower than readState.Index'  (duration: 228.542µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T01:12:22.44621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.456406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-30T01:12:22.446788Z","caller":"traceutil/trace.go:171","msg":"trace[2054337537] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:488; }","duration":"104.083102ms","start":"2024-07-30T01:12:22.342691Z","end":"2024-07-30T01:12:22.446775Z","steps":["trace[2054337537] 'agreement among raft nodes before linearized reading'  (duration: 103.462358ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:12:22.446264Z","caller":"traceutil/trace.go:171","msg":"trace[603993300] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"159.168817ms","start":"2024-07-30T01:12:22.287071Z","end":"2024-07-30T01:12:22.44624Z","steps":["trace[603993300] 'process raft request'  (duration: 158.776558ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:12:22.667479Z","caller":"traceutil/trace.go:171","msg":"trace[606038577] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"215.218292ms","start":"2024-07-30T01:12:22.452236Z","end":"2024-07-30T01:12:22.667455Z","steps":["trace[606038577] 'process raft request'  (duration: 149.893449ms)","trace[606038577] 'compare'  (duration: 64.983009ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T01:13:06.023952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.046533ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15688448736247668784 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-543365-m03.17e6d887e9e6ec82\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-543365-m03.17e6d887e9e6ec82\" value_size:646 lease:6465076699392892621 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-30T01:13:06.024198Z","caller":"traceutil/trace.go:171","msg":"trace[1750628680] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"232.738495ms","start":"2024-07-30T01:13:05.791445Z","end":"2024-07-30T01:13:06.024183Z","steps":["trace[1750628680] 'process raft request'  (duration: 89.821428ms)","trace[1750628680] 'compare'  (duration: 141.92676ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-30T01:13:06.02427Z","caller":"traceutil/trace.go:171","msg":"trace[458488859] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"146.30831ms","start":"2024-07-30T01:13:05.87795Z","end":"2024-07-30T01:13:06.024258Z","steps":["trace[458488859] 'process raft request'  (duration: 146.090823ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:13:06.024477Z","caller":"traceutil/trace.go:171","msg":"trace[338737904] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"207.159985ms","start":"2024-07-30T01:13:05.817307Z","end":"2024-07-30T01:13:06.024467Z","steps":["trace[338737904] 'read index received'  (duration: 63.966971ms)","trace[338737904] 'applied index is now lower than readState.Index'  (duration: 143.192262ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T01:13:06.024585Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.282275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.235\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-30T01:13:06.027627Z","caller":"traceutil/trace.go:171","msg":"trace[1592212921] range","detail":"{range_begin:/registry/masterleases/192.168.39.235; range_end:; response_count:1; response_revision:574; }","duration":"210.342957ms","start":"2024-07-30T01:13:05.817267Z","end":"2024-07-30T01:13:06.02761Z","steps":["trace[1592212921] 'agreement among raft nodes before linearized reading'  (duration: 207.235991ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:16:22.133641Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-30T01:16:22.133758Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-543365","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.235:2380"],"advertise-client-urls":["https://192.168.39.235:2379"]}
	{"level":"warn","ts":"2024-07-30T01:16:22.133862Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T01:16:22.134238Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T01:16:22.183709Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.235:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T01:16:22.183805Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.235:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-30T01:16:22.185357Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"feb6ae41040cd9b8","current-leader-member-id":"feb6ae41040cd9b8"}
	{"level":"info","ts":"2024-07-30T01:16:22.18848Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-07-30T01:16:22.188601Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-07-30T01:16:22.188612Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-543365","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.235:2380"],"advertise-client-urls":["https://192.168.39.235:2379"]}
	
	
	==> etcd [ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb] <==
	{"level":"info","ts":"2024-07-30T01:18:05.193359Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-30T01:18:05.193369Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-30T01:18:05.193606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 switched to configuration voters=(18354048925659093432)"}
	{"level":"info","ts":"2024-07-30T01:18:05.193674Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1b3c53dd134e6187","local-member-id":"feb6ae41040cd9b8","added-peer-id":"feb6ae41040cd9b8","added-peer-peer-urls":["https://192.168.39.235:2380"]}
	{"level":"info","ts":"2024-07-30T01:18:05.193802Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1b3c53dd134e6187","local-member-id":"feb6ae41040cd9b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T01:18:05.193845Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T01:18:05.198101Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-30T01:18:05.19841Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"feb6ae41040cd9b8","initial-advertise-peer-urls":["https://192.168.39.235:2380"],"listen-peer-urls":["https://192.168.39.235:2380"],"advertise-client-urls":["https://192.168.39.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-30T01:18:05.199958Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-30T01:18:05.200182Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-07-30T01:18:05.20392Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-07-30T01:18:06.726953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-30T01:18:06.727011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-30T01:18:06.727048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgPreVoteResp from feb6ae41040cd9b8 at term 2"}
	{"level":"info","ts":"2024-07-30T01:18:06.727063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became candidate at term 3"}
	{"level":"info","ts":"2024-07-30T01:18:06.727069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgVoteResp from feb6ae41040cd9b8 at term 3"}
	{"level":"info","ts":"2024-07-30T01:18:06.727088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became leader at term 3"}
	{"level":"info","ts":"2024-07-30T01:18:06.727097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: feb6ae41040cd9b8 elected leader feb6ae41040cd9b8 at term 3"}
	{"level":"info","ts":"2024-07-30T01:18:06.732493Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"feb6ae41040cd9b8","local-member-attributes":"{Name:multinode-543365 ClientURLs:[https://192.168.39.235:2379]}","request-path":"/0/members/feb6ae41040cd9b8/attributes","cluster-id":"1b3c53dd134e6187","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T01:18:06.732487Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T01:18:06.732991Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T01:18:06.733526Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T01:18:06.733617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T01:18:06.734797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.235:2379"}
	{"level":"info","ts":"2024-07-30T01:18:06.736504Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:19:46 up 9 min,  0 users,  load average: 0.19, 0.14, 0.06
	Linux multinode-543365 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b] <==
	I0730 01:15:36.123530       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:15:46.123353       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:15:46.123611       1 main.go:299] handling current node
	I0730 01:15:46.123684       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:15:46.123704       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:15:46.123999       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:15:46.124033       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:15:56.126770       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:15:56.126934       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:15:56.127124       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:15:56.127160       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:15:56.127237       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:15:56.127257       1 main.go:299] handling current node
	I0730 01:16:06.123982       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:16:06.124030       1 main.go:299] handling current node
	I0730 01:16:06.124063       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:16:06.124069       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:16:06.124231       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:16:06.124254       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:16:16.126105       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:16:16.126189       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:16:16.126382       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:16:16.126404       1 main.go:299] handling current node
	I0730 01:16:16.126420       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:16:16.126425       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902] <==
	I0730 01:19:05.624136       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:19:15.625017       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:19:15.625120       1 main.go:299] handling current node
	I0730 01:19:15.625155       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:19:15.625173       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:19:15.625318       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:19:15.625361       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:19:25.623769       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:19:25.623866       1 main.go:299] handling current node
	I0730 01:19:25.623924       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:19:25.623931       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:19:25.624087       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:19:25.624107       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.2.0/24] 
	I0730 01:19:35.627538       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:19:35.627635       1 main.go:299] handling current node
	I0730 01:19:35.627654       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:19:35.627659       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:19:35.628039       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:19:35.628130       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.2.0/24] 
	I0730 01:19:45.624690       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:19:45.624846       1 main.go:299] handling current node
	I0730 01:19:45.624970       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:19:45.625022       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:19:45.625240       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:19:45.625316       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea] <==
	I0730 01:18:08.034336       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0730 01:18:08.039734       1 aggregator.go:165] initial CRD sync complete...
	I0730 01:18:08.040001       1 autoregister_controller.go:141] Starting autoregister controller
	I0730 01:18:08.040099       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0730 01:18:08.093109       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0730 01:18:08.101771       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 01:18:08.101938       1 policy_source.go:224] refreshing policies
	I0730 01:18:08.109205       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 01:18:08.109241       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 01:18:08.109819       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0730 01:18:08.114961       1 shared_informer.go:320] Caches are synced for configmaps
	I0730 01:18:08.115015       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0730 01:18:08.115021       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0730 01:18:08.115709       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0730 01:18:08.119651       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0730 01:18:08.132985       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0730 01:18:08.142286       1 cache.go:39] Caches are synced for autoregister controller
	I0730 01:18:08.918641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0730 01:18:10.803021       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0730 01:18:10.911681       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0730 01:18:10.921819       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0730 01:18:10.985132       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0730 01:18:10.991068       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0730 01:18:21.247514       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 01:18:21.348155       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08] <==
	W0730 01:16:22.151636       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.151685       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.151741       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.151793       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.156572       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.156816       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157178       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157253       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157308       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157360       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157414       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157478       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157537       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157596       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157658       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157704       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157753       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157809       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157860       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158140       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158524       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158586       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158651       1 logging.go:59] [core] [Channel #8 SubChannel #9] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158801       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158864       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42] <==
	I0730 01:18:21.747343       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 01:18:21.795160       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 01:18:21.795250       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0730 01:18:42.354231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.914µs"
	I0730 01:18:43.679151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.993298ms"
	I0730 01:18:43.688537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.262082ms"
	I0730 01:18:43.688669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.284µs"
	I0730 01:18:47.784324       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m02\" does not exist"
	I0730 01:18:47.791577       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m02" podCIDRs=["10.244.1.0/24"]
	I0730 01:18:48.699598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.714µs"
	I0730 01:18:48.711757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.413µs"
	I0730 01:18:48.723852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.433µs"
	I0730 01:18:48.741709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.709µs"
	I0730 01:18:48.750605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.737µs"
	I0730 01:18:48.754040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.744µs"
	I0730 01:19:06.278610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:19:06.298916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.551µs"
	I0730 01:19:06.312454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.171µs"
	I0730 01:19:10.334560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.288858ms"
	I0730 01:19:10.334667       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.632µs"
	I0730 01:19:24.194268       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:19:25.484543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:19:25.484854       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m03\" does not exist"
	I0730 01:19:25.513055       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m03" podCIDRs=["10.244.2.0/24"]
	I0730 01:19:43.526580       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	
	
	==> kube-controller-manager [1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3] <==
	I0730 01:12:14.373075       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m02\" does not exist"
	I0730 01:12:14.386123       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m02" podCIDRs=["10.244.1.0/24"]
	I0730 01:12:14.549257       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-543365-m02"
	I0730 01:12:33.137362       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:12:35.706814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.858206ms"
	I0730 01:12:35.727179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.228029ms"
	I0730 01:12:35.740025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.798703ms"
	I0730 01:12:35.740227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.066µs"
	I0730 01:12:39.185072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.656252ms"
	I0730 01:12:39.186126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.755µs"
	I0730 01:12:39.233120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.752452ms"
	I0730 01:12:39.235511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.774µs"
	I0730 01:13:06.026136       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m03\" does not exist"
	I0730 01:13:06.026715       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:13:06.040561       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m03" podCIDRs=["10.244.2.0/24"]
	I0730 01:13:09.571701       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-543365-m03"
	I0730 01:13:26.768602       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m03"
	I0730 01:13:55.487715       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:13:56.426523       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m03\" does not exist"
	I0730 01:13:56.427354       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:13:56.443231       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m03" podCIDRs=["10.244.3.0/24"]
	I0730 01:14:15.886041       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m03"
	I0730 01:14:59.625709       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:14:59.698098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.451409ms"
	I0730 01:14:59.698214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.178µs"
	
	
	==> kube-proxy [7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c] <==
	I0730 01:18:05.576355       1 server_linux.go:69] "Using iptables proxy"
	I0730 01:18:08.092518       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.235"]
	I0730 01:18:08.153927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 01:18:08.153990       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 01:18:08.154012       1 server_linux.go:165] "Using iptables Proxier"
	I0730 01:18:08.156432       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 01:18:08.156784       1 server.go:872] "Version info" version="v1.30.3"
	I0730 01:18:08.156797       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 01:18:08.158734       1 config.go:192] "Starting service config controller"
	I0730 01:18:08.158795       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 01:18:08.158841       1 config.go:101] "Starting endpoint slice config controller"
	I0730 01:18:08.158859       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 01:18:08.159498       1 config.go:319] "Starting node config controller"
	I0730 01:18:08.159526       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 01:18:08.259851       1 shared_informer.go:320] Caches are synced for node config
	I0730 01:18:08.259953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 01:18:08.260051       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068] <==
	I0730 01:11:31.869943       1 server_linux.go:69] "Using iptables proxy"
	I0730 01:11:31.885455       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.235"]
	I0730 01:11:31.916184       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 01:11:31.916279       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 01:11:31.916315       1 server_linux.go:165] "Using iptables Proxier"
	I0730 01:11:31.918541       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 01:11:31.919017       1 server.go:872] "Version info" version="v1.30.3"
	I0730 01:11:31.919046       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 01:11:31.920994       1 config.go:192] "Starting service config controller"
	I0730 01:11:31.921025       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 01:11:31.921048       1 config.go:101] "Starting endpoint slice config controller"
	I0730 01:11:31.921051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 01:11:31.921517       1 config.go:319] "Starting node config controller"
	I0730 01:11:31.921551       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 01:11:32.021454       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 01:11:32.021549       1 shared_informer.go:320] Caches are synced for service config
	I0730 01:11:32.021568       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce] <==
	E0730 01:11:14.134500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0730 01:11:14.134601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 01:11:14.134624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 01:11:14.134713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 01:11:14.134735       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 01:11:14.135130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 01:11:14.135288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 01:11:14.980795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0730 01:11:14.980842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0730 01:11:15.015121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0730 01:11:15.015172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0730 01:11:15.033808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 01:11:15.033838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 01:11:15.072417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 01:11:15.072533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 01:11:15.085326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 01:11:15.085485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 01:11:15.156140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 01:11:15.156261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 01:11:15.375186       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 01:11:15.375959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 01:11:15.450014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0730 01:11:15.450340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0730 01:11:15.726056       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0730 01:16:22.125774       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d] <==
	W0730 01:18:08.030661       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 01:18:08.030672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0730 01:18:08.030729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 01:18:08.030752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 01:18:08.030806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0730 01:18:08.030829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0730 01:18:08.030931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0730 01:18:08.030954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0730 01:18:08.031013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 01:18:08.031035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 01:18:08.031085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 01:18:08.031108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 01:18:08.031158       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0730 01:18:08.031180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0730 01:18:08.031239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 01:18:08.031261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 01:18:08.031319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 01:18:08.031341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 01:18:08.031416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0730 01:18:08.031438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0730 01:18:08.031492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 01:18:08.031512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0730 01:18:08.031567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 01:18:08.031587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0730 01:18:09.004944       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 01:18:10 multinode-543365 kubelet[3805]: I0730 01:18:10.483772    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c8a5915b3d273b95feb8931e355b638-ca-certs\") pod \"kube-controller-manager-multinode-543365\" (UID: \"0c8a5915b3d273b95feb8931e355b638\") " pod="kube-system/kube-controller-manager-multinode-543365"
	Jul 30 01:18:10 multinode-543365 kubelet[3805]: I0730 01:18:10.483798    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0c8a5915b3d273b95feb8931e355b638-flexvolume-dir\") pod \"kube-controller-manager-multinode-543365\" (UID: \"0c8a5915b3d273b95feb8931e355b638\") " pod="kube-system/kube-controller-manager-multinode-543365"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.231409    3805 apiserver.go:52] "Watching apiserver"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.234498    3805 topology_manager.go:215] "Topology Admit Handler" podUID="e340baed-b0bc-417f-a3c8-2739cfdc97c4" podNamespace="kube-system" podName="kube-proxy-kknjc"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.234628    3805 topology_manager.go:215] "Topology Admit Handler" podUID="1498a653-557e-46df-84a2-a58156bebfe7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4lxcw"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.234673    3805 topology_manager.go:215] "Topology Admit Handler" podUID="60c95f91-4cb1-4f07-a34c-bed380318903" podNamespace="kube-system" podName="kindnet-nhqxm"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.234726    3805 topology_manager.go:215] "Topology Admit Handler" podUID="a772f2dd-f657-4eb2-9c29-f612c46c1e6e" podNamespace="kube-system" podName="storage-provisioner"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.234764    3805 topology_manager.go:215] "Topology Admit Handler" podUID="6e9e683c-04d9-456c-a7d5-206e09d00256" podNamespace="default" podName="busybox-fc5497c4f-t9w48"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.262652    3805 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.289683    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a772f2dd-f657-4eb2-9c29-f612c46c1e6e-tmp\") pod \"storage-provisioner\" (UID: \"a772f2dd-f657-4eb2-9c29-f612c46c1e6e\") " pod="kube-system/storage-provisioner"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.289943    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e340baed-b0bc-417f-a3c8-2739cfdc97c4-xtables-lock\") pod \"kube-proxy-kknjc\" (UID: \"e340baed-b0bc-417f-a3c8-2739cfdc97c4\") " pod="kube-system/kube-proxy-kknjc"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.290064    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c95f91-4cb1-4f07-a34c-bed380318903-xtables-lock\") pod \"kindnet-nhqxm\" (UID: \"60c95f91-4cb1-4f07-a34c-bed380318903\") " pod="kube-system/kindnet-nhqxm"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.290264    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e340baed-b0bc-417f-a3c8-2739cfdc97c4-lib-modules\") pod \"kube-proxy-kknjc\" (UID: \"e340baed-b0bc-417f-a3c8-2739cfdc97c4\") " pod="kube-system/kube-proxy-kknjc"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.290354    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/60c95f91-4cb1-4f07-a34c-bed380318903-cni-cfg\") pod \"kindnet-nhqxm\" (UID: \"60c95f91-4cb1-4f07-a34c-bed380318903\") " pod="kube-system/kindnet-nhqxm"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.290396    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c95f91-4cb1-4f07-a34c-bed380318903-lib-modules\") pod \"kindnet-nhqxm\" (UID: \"60c95f91-4cb1-4f07-a34c-bed380318903\") " pod="kube-system/kindnet-nhqxm"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: E0730 01:18:11.497219    3805 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-multinode-543365\" already exists" pod="kube-system/etcd-multinode-543365"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: E0730 01:18:11.501499    3805 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-543365\" already exists" pod="kube-system/kube-apiserver-multinode-543365"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: E0730 01:18:11.504432    3805 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-543365\" already exists" pod="kube-system/kube-controller-manager-multinode-543365"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: E0730 01:18:11.505691    3805 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-543365\" already exists" pod="kube-system/kube-scheduler-multinode-543365"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.535529    3805 scope.go:117] "RemoveContainer" containerID="302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887"
	Jul 30 01:19:10 multinode-543365 kubelet[3805]: E0730 01:19:10.393622    3805 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 01:19:10 multinode-543365 kubelet[3805]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 01:19:10 multinode-543365 kubelet[3805]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 01:19:10 multinode-543365 kubelet[3805]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 01:19:10 multinode-543365 kubelet[3805]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0730 01:19:45.944951  536918 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19346-495103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-543365 -n multinode-543365
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-543365 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (329.30s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 stop
E0730 01:21:10.081120  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-543365 stop: exit status 82 (2m0.468395612s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-543365-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-543365 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-543365 status: exit status 3 (18.871780643s)

                                                
                                                
-- stdout --
	multinode-543365
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-543365-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0730 01:22:09.489114  537576 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	E0730 01:22:09.489153  537576 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-543365 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-543365 -n multinode-543365
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-543365 logs -n 25: (1.42690288s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m02:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365:/home/docker/cp-test_multinode-543365-m02_multinode-543365.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365 sudo cat                                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m02_multinode-543365.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m02:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03:/home/docker/cp-test_multinode-543365-m02_multinode-543365-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365-m03 sudo cat                                   | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m02_multinode-543365-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp testdata/cp-test.txt                                                | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile929286498/001/cp-test_multinode-543365-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365:/home/docker/cp-test_multinode-543365-m03_multinode-543365.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365 sudo cat                                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m03_multinode-543365.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt                       | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02:/home/docker/cp-test_multinode-543365-m03_multinode-543365-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365-m02 sudo cat                                   | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m03_multinode-543365-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-543365 node stop m03                                                          | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	| node    | multinode-543365 node start                                                             | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-543365                                                                | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:14 UTC |                     |
	| stop    | -p multinode-543365                                                                     | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:14 UTC |                     |
	| start   | -p multinode-543365                                                                     | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:16 UTC | 30 Jul 24 01:19 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-543365                                                                | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:19 UTC |                     |
	| node    | multinode-543365 node delete                                                            | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:19 UTC | 30 Jul 24 01:19 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-543365 stop                                                                   | multinode-543365 | jenkins | v1.33.1 | 30 Jul 24 01:19 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 01:16:21
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 01:16:21.191281  535383 out.go:291] Setting OutFile to fd 1 ...
	I0730 01:16:21.191396  535383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:16:21.191404  535383 out.go:304] Setting ErrFile to fd 2...
	I0730 01:16:21.191408  535383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:16:21.191608  535383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 01:16:21.192167  535383 out.go:298] Setting JSON to false
	I0730 01:16:21.193163  535383 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10723,"bootTime":1722291458,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 01:16:21.193229  535383 start.go:139] virtualization: kvm guest
	I0730 01:16:21.195638  535383 out.go:177] * [multinode-543365] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 01:16:21.197529  535383 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 01:16:21.197555  535383 notify.go:220] Checking for updates...
	I0730 01:16:21.200046  535383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 01:16:21.201440  535383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 01:16:21.202886  535383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 01:16:21.204270  535383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 01:16:21.205701  535383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 01:16:21.207407  535383 config.go:182] Loaded profile config "multinode-543365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 01:16:21.207503  535383 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 01:16:21.208060  535383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:16:21.208126  535383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:16:21.223599  535383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0730 01:16:21.224177  535383 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:16:21.224758  535383 main.go:141] libmachine: Using API Version  1
	I0730 01:16:21.224777  535383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:16:21.225217  535383 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:16:21.225452  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:16:21.263911  535383 out.go:177] * Using the kvm2 driver based on existing profile
	I0730 01:16:21.265257  535383 start.go:297] selected driver: kvm2
	I0730 01:16:21.265275  535383 start.go:901] validating driver "kvm2" against &{Name:multinode-543365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:16:21.265433  535383 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 01:16:21.265767  535383 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:16:21.265865  535383 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 01:16:21.281872  535383 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 01:16:21.282576  535383 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 01:16:21.282632  535383 cni.go:84] Creating CNI manager for ""
	I0730 01:16:21.282644  535383 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 01:16:21.282708  535383 start.go:340] cluster config:
	{Name:multinode-543365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:16:21.282860  535383 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:16:21.284523  535383 out.go:177] * Starting "multinode-543365" primary control-plane node in "multinode-543365" cluster
	I0730 01:16:21.285677  535383 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 01:16:21.285720  535383 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 01:16:21.285734  535383 cache.go:56] Caching tarball of preloaded images
	I0730 01:16:21.285830  535383 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 01:16:21.285843  535383 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 01:16:21.285959  535383 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/config.json ...
	I0730 01:16:21.286198  535383 start.go:360] acquireMachinesLock for multinode-543365: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 01:16:21.286249  535383 start.go:364] duration metric: took 29.734µs to acquireMachinesLock for "multinode-543365"
	I0730 01:16:21.286270  535383 start.go:96] Skipping create...Using existing machine configuration
	I0730 01:16:21.286280  535383 fix.go:54] fixHost starting: 
	I0730 01:16:21.286586  535383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:16:21.286626  535383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:16:21.301307  535383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0730 01:16:21.301858  535383 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:16:21.302508  535383 main.go:141] libmachine: Using API Version  1
	I0730 01:16:21.302533  535383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:16:21.302830  535383 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:16:21.303025  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:16:21.303187  535383 main.go:141] libmachine: (multinode-543365) Calling .GetState
	I0730 01:16:21.304788  535383 fix.go:112] recreateIfNeeded on multinode-543365: state=Running err=<nil>
	W0730 01:16:21.304812  535383 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 01:16:21.307737  535383 out.go:177] * Updating the running kvm2 "multinode-543365" VM ...
	I0730 01:16:21.309047  535383 machine.go:94] provisionDockerMachine start ...
	I0730 01:16:21.309078  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:16:21.309309  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.312674  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.313281  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.313326  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.313545  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:21.313759  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.313921  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.314063  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:21.314236  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:16:21.314600  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:16:21.314614  535383 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 01:16:21.435368  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-543365
	
	I0730 01:16:21.435407  535383 main.go:141] libmachine: (multinode-543365) Calling .GetMachineName
	I0730 01:16:21.435726  535383 buildroot.go:166] provisioning hostname "multinode-543365"
	I0730 01:16:21.435758  535383 main.go:141] libmachine: (multinode-543365) Calling .GetMachineName
	I0730 01:16:21.435961  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.439671  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.440109  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.440140  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.440279  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:21.440480  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.440656  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.440816  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:21.441013  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:16:21.441194  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:16:21.441208  535383 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-543365 && echo "multinode-543365" | sudo tee /etc/hostname
	I0730 01:16:21.568904  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-543365
	
	I0730 01:16:21.568953  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.572044  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.572529  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.572563  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.572750  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:21.572952  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.573131  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.573260  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:21.573427  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:16:21.573589  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:16:21.573606  535383 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-543365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-543365/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-543365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 01:16:21.685395  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 01:16:21.685434  535383 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 01:16:21.685479  535383 buildroot.go:174] setting up certificates
	I0730 01:16:21.685488  535383 provision.go:84] configureAuth start
	I0730 01:16:21.685501  535383 main.go:141] libmachine: (multinode-543365) Calling .GetMachineName
	I0730 01:16:21.685836  535383 main.go:141] libmachine: (multinode-543365) Calling .GetIP
	I0730 01:16:21.688368  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.688815  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.688846  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.689062  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.691451  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.691779  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.691810  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.691943  535383 provision.go:143] copyHostCerts
	I0730 01:16:21.691980  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 01:16:21.692035  535383 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 01:16:21.692054  535383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 01:16:21.692139  535383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 01:16:21.692238  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 01:16:21.692264  535383 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 01:16:21.692271  535383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 01:16:21.692310  535383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 01:16:21.692383  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 01:16:21.692405  535383 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 01:16:21.692412  535383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 01:16:21.692449  535383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 01:16:21.692519  535383 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.multinode-543365 san=[127.0.0.1 192.168.39.235 localhost minikube multinode-543365]
	I0730 01:16:21.839675  535383 provision.go:177] copyRemoteCerts
	I0730 01:16:21.839740  535383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 01:16:21.839768  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:21.842411  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.842822  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:21.842850  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:21.843044  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:21.843240  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:21.843434  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:21.843590  535383 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365/id_rsa Username:docker}
	I0730 01:16:21.926837  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 01:16:21.926922  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 01:16:21.953564  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 01:16:21.953647  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0730 01:16:21.976844  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 01:16:21.976925  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0730 01:16:22.000511  535383 provision.go:87] duration metric: took 315.009219ms to configureAuth
	I0730 01:16:22.000542  535383 buildroot.go:189] setting minikube options for container-runtime
	I0730 01:16:22.000767  535383 config.go:182] Loaded profile config "multinode-543365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 01:16:22.000843  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:16:22.003709  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:22.004159  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:16:22.004193  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:16:22.004391  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:16:22.004589  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:22.004770  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:16:22.004902  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:16:22.005074  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:16:22.005228  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:16:22.005245  535383 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 01:17:52.701740  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 01:17:52.701780  535383 machine.go:97] duration metric: took 1m31.392713504s to provisionDockerMachine
	I0730 01:17:52.701798  535383 start.go:293] postStartSetup for "multinode-543365" (driver="kvm2")
	I0730 01:17:52.701814  535383 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 01:17:52.701845  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.702350  535383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 01:17:52.702391  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:17:52.706505  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.709429  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.709464  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.709694  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:17:52.710064  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.710255  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:17:52.710459  535383 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365/id_rsa Username:docker}
	I0730 01:17:52.804508  535383 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 01:17:52.809498  535383 command_runner.go:130] > NAME=Buildroot
	I0730 01:17:52.809517  535383 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0730 01:17:52.809523  535383 command_runner.go:130] > ID=buildroot
	I0730 01:17:52.809529  535383 command_runner.go:130] > VERSION_ID=2023.02.9
	I0730 01:17:52.809536  535383 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0730 01:17:52.809628  535383 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 01:17:52.809648  535383 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 01:17:52.809695  535383 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 01:17:52.809775  535383 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 01:17:52.809803  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /etc/ssl/certs/5023842.pem
	I0730 01:17:52.809900  535383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 01:17:52.823698  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:17:52.849407  535383 start.go:296] duration metric: took 147.590667ms for postStartSetup
	I0730 01:17:52.849460  535383 fix.go:56] duration metric: took 1m31.563180244s for fixHost
	I0730 01:17:52.849495  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:17:52.852582  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.853078  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.853109  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.853295  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:17:52.853482  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.853639  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.853811  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:17:52.854043  535383 main.go:141] libmachine: Using SSH client type: native
	I0730 01:17:52.854209  535383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0730 01:17:52.854221  535383 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 01:17:52.965408  535383 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722302272.942958518
	
	I0730 01:17:52.965446  535383 fix.go:216] guest clock: 1722302272.942958518
	I0730 01:17:52.965455  535383 fix.go:229] Guest: 2024-07-30 01:17:52.942958518 +0000 UTC Remote: 2024-07-30 01:17:52.849472098 +0000 UTC m=+91.694556362 (delta=93.48642ms)
	I0730 01:17:52.965480  535383 fix.go:200] guest clock delta is within tolerance: 93.48642ms
	I0730 01:17:52.965487  535383 start.go:83] releasing machines lock for "multinode-543365", held for 1m31.679225146s
	I0730 01:17:52.965513  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.965746  535383 main.go:141] libmachine: (multinode-543365) Calling .GetIP
	I0730 01:17:52.968413  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.968802  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.968837  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.969003  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.969618  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.969811  535383 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:17:52.969927  535383 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 01:17:52.969969  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:17:52.970035  535383 ssh_runner.go:195] Run: cat /version.json
	I0730 01:17:52.970069  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:17:52.972727  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.972895  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.973231  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.973261  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.973415  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:17:52.973562  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:52.973582  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.973586  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:52.973748  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:17:52.973799  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:17:52.973894  535383 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365/id_rsa Username:docker}
	I0730 01:17:52.974002  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:17:52.974147  535383 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:17:52.974307  535383 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365/id_rsa Username:docker}
	I0730 01:17:53.080047  535383 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0730 01:17:53.080761  535383 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0730 01:17:53.080941  535383 ssh_runner.go:195] Run: systemctl --version
	I0730 01:17:53.086625  535383 command_runner.go:130] > systemd 252 (252)
	I0730 01:17:53.086667  535383 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0730 01:17:53.086715  535383 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 01:17:53.244970  535383 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0730 01:17:53.253677  535383 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0730 01:17:53.253891  535383 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 01:17:53.253955  535383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 01:17:53.264113  535383 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 01:17:53.264141  535383 start.go:495] detecting cgroup driver to use...
	I0730 01:17:53.264209  535383 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 01:17:53.281969  535383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 01:17:53.295630  535383 docker.go:217] disabling cri-docker service (if available) ...
	I0730 01:17:53.295708  535383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 01:17:53.310774  535383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 01:17:53.325215  535383 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 01:17:53.496754  535383 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 01:17:53.634308  535383 docker.go:233] disabling docker service ...
	I0730 01:17:53.634388  535383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 01:17:53.649661  535383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 01:17:53.663051  535383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 01:17:53.800061  535383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 01:17:53.934742  535383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 01:17:53.948670  535383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 01:17:53.967200  535383 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0730 01:17:53.967243  535383 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 01:17:53.967296  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:53.977455  535383 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 01:17:53.977514  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:53.987231  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:53.996764  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:54.006426  535383 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 01:17:54.016837  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:54.026449  535383 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:54.036535  535383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:17:54.046066  535383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 01:17:54.054487  535383 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0730 01:17:54.054580  535383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 01:17:54.063160  535383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:17:54.194201  535383 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 01:17:57.428759  535383 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.234509364s)
	I0730 01:17:57.428802  535383 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 01:17:57.428861  535383 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 01:17:57.433794  535383 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0730 01:17:57.433820  535383 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0730 01:17:57.433831  535383 command_runner.go:130] > Device: 0,22	Inode: 1346        Links: 1
	I0730 01:17:57.433839  535383 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0730 01:17:57.433847  535383 command_runner.go:130] > Access: 2024-07-30 01:17:57.306296141 +0000
	I0730 01:17:57.433856  535383 command_runner.go:130] > Modify: 2024-07-30 01:17:57.306296141 +0000
	I0730 01:17:57.433870  535383 command_runner.go:130] > Change: 2024-07-30 01:17:57.306296141 +0000
	I0730 01:17:57.433879  535383 command_runner.go:130] >  Birth: -
	I0730 01:17:57.433903  535383 start.go:563] Will wait 60s for crictl version
	I0730 01:17:57.433950  535383 ssh_runner.go:195] Run: which crictl
	I0730 01:17:57.437289  535383 command_runner.go:130] > /usr/bin/crictl
	I0730 01:17:57.437400  535383 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 01:17:57.479036  535383 command_runner.go:130] > Version:  0.1.0
	I0730 01:17:57.479064  535383 command_runner.go:130] > RuntimeName:  cri-o
	I0730 01:17:57.479070  535383 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0730 01:17:57.479078  535383 command_runner.go:130] > RuntimeApiVersion:  v1
	I0730 01:17:57.479144  535383 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 01:17:57.479263  535383 ssh_runner.go:195] Run: crio --version
	I0730 01:17:57.504577  535383 command_runner.go:130] > crio version 1.29.1
	I0730 01:17:57.504607  535383 command_runner.go:130] > Version:        1.29.1
	I0730 01:17:57.504615  535383 command_runner.go:130] > GitCommit:      unknown
	I0730 01:17:57.504621  535383 command_runner.go:130] > GitCommitDate:  unknown
	I0730 01:17:57.504627  535383 command_runner.go:130] > GitTreeState:   clean
	I0730 01:17:57.504641  535383 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0730 01:17:57.504649  535383 command_runner.go:130] > GoVersion:      go1.21.6
	I0730 01:17:57.504655  535383 command_runner.go:130] > Compiler:       gc
	I0730 01:17:57.504662  535383 command_runner.go:130] > Platform:       linux/amd64
	I0730 01:17:57.504669  535383 command_runner.go:130] > Linkmode:       dynamic
	I0730 01:17:57.504673  535383 command_runner.go:130] > BuildTags:      
	I0730 01:17:57.504678  535383 command_runner.go:130] >   containers_image_ostree_stub
	I0730 01:17:57.504682  535383 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0730 01:17:57.504686  535383 command_runner.go:130] >   btrfs_noversion
	I0730 01:17:57.504691  535383 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0730 01:17:57.504700  535383 command_runner.go:130] >   libdm_no_deferred_remove
	I0730 01:17:57.504715  535383 command_runner.go:130] >   seccomp
	I0730 01:17:57.504724  535383 command_runner.go:130] > LDFlags:          unknown
	I0730 01:17:57.504731  535383 command_runner.go:130] > SeccompEnabled:   true
	I0730 01:17:57.504737  535383 command_runner.go:130] > AppArmorEnabled:  false
	I0730 01:17:57.505825  535383 ssh_runner.go:195] Run: crio --version
	I0730 01:17:57.531848  535383 command_runner.go:130] > crio version 1.29.1
	I0730 01:17:57.531872  535383 command_runner.go:130] > Version:        1.29.1
	I0730 01:17:57.531879  535383 command_runner.go:130] > GitCommit:      unknown
	I0730 01:17:57.531882  535383 command_runner.go:130] > GitCommitDate:  unknown
	I0730 01:17:57.531886  535383 command_runner.go:130] > GitTreeState:   clean
	I0730 01:17:57.531892  535383 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0730 01:17:57.531896  535383 command_runner.go:130] > GoVersion:      go1.21.6
	I0730 01:17:57.531900  535383 command_runner.go:130] > Compiler:       gc
	I0730 01:17:57.531908  535383 command_runner.go:130] > Platform:       linux/amd64
	I0730 01:17:57.531911  535383 command_runner.go:130] > Linkmode:       dynamic
	I0730 01:17:57.531921  535383 command_runner.go:130] > BuildTags:      
	I0730 01:17:57.531926  535383 command_runner.go:130] >   containers_image_ostree_stub
	I0730 01:17:57.531930  535383 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0730 01:17:57.531933  535383 command_runner.go:130] >   btrfs_noversion
	I0730 01:17:57.531937  535383 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0730 01:17:57.531941  535383 command_runner.go:130] >   libdm_no_deferred_remove
	I0730 01:17:57.531945  535383 command_runner.go:130] >   seccomp
	I0730 01:17:57.531949  535383 command_runner.go:130] > LDFlags:          unknown
	I0730 01:17:57.531953  535383 command_runner.go:130] > SeccompEnabled:   true
	I0730 01:17:57.531959  535383 command_runner.go:130] > AppArmorEnabled:  false
	I0730 01:17:57.534871  535383 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0730 01:17:57.536180  535383 main.go:141] libmachine: (multinode-543365) Calling .GetIP
	I0730 01:17:57.538600  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:57.538954  535383 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:17:57.538982  535383 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:17:57.539164  535383 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 01:17:57.543178  535383 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0730 01:17:57.543305  535383 kubeadm.go:883] updating cluster {Name:multinode-543365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 01:17:57.543477  535383 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 01:17:57.543536  535383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:17:57.592034  535383 command_runner.go:130] > {
	I0730 01:17:57.592065  535383 command_runner.go:130] >   "images": [
	I0730 01:17:57.592072  535383 command_runner.go:130] >     {
	I0730 01:17:57.592083  535383 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0730 01:17:57.592090  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592099  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0730 01:17:57.592104  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592110  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592124  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0730 01:17:57.592142  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0730 01:17:57.592150  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592158  535383 command_runner.go:130] >       "size": "87165492",
	I0730 01:17:57.592167  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592173  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592188  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592197  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592205  535383 command_runner.go:130] >     },
	I0730 01:17:57.592211  535383 command_runner.go:130] >     {
	I0730 01:17:57.592223  535383 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0730 01:17:57.592233  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592244  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0730 01:17:57.592253  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592262  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592276  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0730 01:17:57.592289  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0730 01:17:57.592297  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592306  535383 command_runner.go:130] >       "size": "87174707",
	I0730 01:17:57.592312  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592325  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592331  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592335  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592340  535383 command_runner.go:130] >     },
	I0730 01:17:57.592343  535383 command_runner.go:130] >     {
	I0730 01:17:57.592352  535383 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0730 01:17:57.592358  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592363  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0730 01:17:57.592369  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592375  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592383  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0730 01:17:57.592393  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0730 01:17:57.592399  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592405  535383 command_runner.go:130] >       "size": "1363676",
	I0730 01:17:57.592411  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592416  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592422  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592427  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592432  535383 command_runner.go:130] >     },
	I0730 01:17:57.592436  535383 command_runner.go:130] >     {
	I0730 01:17:57.592444  535383 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0730 01:17:57.592450  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592457  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0730 01:17:57.592460  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592464  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592472  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0730 01:17:57.592485  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0730 01:17:57.592491  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592495  535383 command_runner.go:130] >       "size": "31470524",
	I0730 01:17:57.592501  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592505  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592511  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592515  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592521  535383 command_runner.go:130] >     },
	I0730 01:17:57.592531  535383 command_runner.go:130] >     {
	I0730 01:17:57.592539  535383 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0730 01:17:57.592545  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592550  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0730 01:17:57.592556  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592560  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592570  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0730 01:17:57.592579  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0730 01:17:57.592585  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592589  535383 command_runner.go:130] >       "size": "61245718",
	I0730 01:17:57.592595  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592600  535383 command_runner.go:130] >       "username": "nonroot",
	I0730 01:17:57.592606  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592610  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592615  535383 command_runner.go:130] >     },
	I0730 01:17:57.592619  535383 command_runner.go:130] >     {
	I0730 01:17:57.592624  535383 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0730 01:17:57.592630  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592635  535383 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0730 01:17:57.592641  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592645  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592654  535383 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0730 01:17:57.592663  535383 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0730 01:17:57.592668  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592672  535383 command_runner.go:130] >       "size": "150779692",
	I0730 01:17:57.592677  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.592681  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.592687  535383 command_runner.go:130] >       },
	I0730 01:17:57.592690  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592696  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592700  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592722  535383 command_runner.go:130] >     },
	I0730 01:17:57.592730  535383 command_runner.go:130] >     {
	I0730 01:17:57.592740  535383 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0730 01:17:57.592747  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592752  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0730 01:17:57.592758  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592762  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592771  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0730 01:17:57.592781  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0730 01:17:57.592784  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592790  535383 command_runner.go:130] >       "size": "117609954",
	I0730 01:17:57.592794  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.592800  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.592803  535383 command_runner.go:130] >       },
	I0730 01:17:57.592809  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592814  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592819  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592823  535383 command_runner.go:130] >     },
	I0730 01:17:57.592828  535383 command_runner.go:130] >     {
	I0730 01:17:57.592834  535383 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0730 01:17:57.592840  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592846  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0730 01:17:57.592851  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592855  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592871  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0730 01:17:57.592881  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0730 01:17:57.592887  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592891  535383 command_runner.go:130] >       "size": "112198984",
	I0730 01:17:57.592897  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.592901  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.592904  535383 command_runner.go:130] >       },
	I0730 01:17:57.592908  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592911  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592915  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592918  535383 command_runner.go:130] >     },
	I0730 01:17:57.592921  535383 command_runner.go:130] >     {
	I0730 01:17:57.592927  535383 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0730 01:17:57.592930  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.592935  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0730 01:17:57.592938  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592942  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.592959  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0730 01:17:57.592966  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0730 01:17:57.592969  535383 command_runner.go:130] >       ],
	I0730 01:17:57.592973  535383 command_runner.go:130] >       "size": "85953945",
	I0730 01:17:57.592976  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.592980  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.592983  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.592986  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.592989  535383 command_runner.go:130] >     },
	I0730 01:17:57.592992  535383 command_runner.go:130] >     {
	I0730 01:17:57.592997  535383 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0730 01:17:57.593001  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.593006  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0730 01:17:57.593009  535383 command_runner.go:130] >       ],
	I0730 01:17:57.593012  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.593019  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0730 01:17:57.593025  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0730 01:17:57.593028  535383 command_runner.go:130] >       ],
	I0730 01:17:57.593032  535383 command_runner.go:130] >       "size": "63051080",
	I0730 01:17:57.593036  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.593040  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.593044  535383 command_runner.go:130] >       },
	I0730 01:17:57.593048  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.593052  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.593057  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.593063  535383 command_runner.go:130] >     },
	I0730 01:17:57.593071  535383 command_runner.go:130] >     {
	I0730 01:17:57.593080  535383 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0730 01:17:57.593089  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.593099  535383 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0730 01:17:57.593104  535383 command_runner.go:130] >       ],
	I0730 01:17:57.593113  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.593127  535383 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0730 01:17:57.593137  535383 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0730 01:17:57.593143  535383 command_runner.go:130] >       ],
	I0730 01:17:57.593147  535383 command_runner.go:130] >       "size": "750414",
	I0730 01:17:57.593153  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.593157  535383 command_runner.go:130] >         "value": "65535"
	I0730 01:17:57.593162  535383 command_runner.go:130] >       },
	I0730 01:17:57.593167  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.593173  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.593177  535383 command_runner.go:130] >       "pinned": true
	I0730 01:17:57.593182  535383 command_runner.go:130] >     }
	I0730 01:17:57.593185  535383 command_runner.go:130] >   ]
	I0730 01:17:57.593188  535383 command_runner.go:130] > }
	I0730 01:17:57.593379  535383 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 01:17:57.593391  535383 crio.go:433] Images already preloaded, skipping extraction
	I0730 01:17:57.593445  535383 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:17:57.623888  535383 command_runner.go:130] > {
	I0730 01:17:57.623916  535383 command_runner.go:130] >   "images": [
	I0730 01:17:57.623923  535383 command_runner.go:130] >     {
	I0730 01:17:57.623936  535383 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0730 01:17:57.623943  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.623951  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0730 01:17:57.623957  535383 command_runner.go:130] >       ],
	I0730 01:17:57.623965  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.623982  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0730 01:17:57.623995  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0730 01:17:57.624004  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624014  535383 command_runner.go:130] >       "size": "87165492",
	I0730 01:17:57.624023  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624032  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624043  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624052  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624058  535383 command_runner.go:130] >     },
	I0730 01:17:57.624066  535383 command_runner.go:130] >     {
	I0730 01:17:57.624080  535383 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0730 01:17:57.624088  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624099  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0730 01:17:57.624107  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624113  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624127  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0730 01:17:57.624142  535383 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0730 01:17:57.624151  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624160  535383 command_runner.go:130] >       "size": "87174707",
	I0730 01:17:57.624169  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624186  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624203  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624215  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624223  535383 command_runner.go:130] >     },
	I0730 01:17:57.624232  535383 command_runner.go:130] >     {
	I0730 01:17:57.624242  535383 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0730 01:17:57.624251  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624262  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0730 01:17:57.624271  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624280  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624296  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0730 01:17:57.624310  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0730 01:17:57.624324  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624334  535383 command_runner.go:130] >       "size": "1363676",
	I0730 01:17:57.624343  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624352  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624365  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624374  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624382  535383 command_runner.go:130] >     },
	I0730 01:17:57.624391  535383 command_runner.go:130] >     {
	I0730 01:17:57.624401  535383 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0730 01:17:57.624410  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624420  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0730 01:17:57.624426  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624430  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624440  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0730 01:17:57.624457  535383 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0730 01:17:57.624463  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624467  535383 command_runner.go:130] >       "size": "31470524",
	I0730 01:17:57.624473  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624477  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624483  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624487  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624492  535383 command_runner.go:130] >     },
	I0730 01:17:57.624496  535383 command_runner.go:130] >     {
	I0730 01:17:57.624502  535383 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0730 01:17:57.624508  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624519  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0730 01:17:57.624525  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624529  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624538  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0730 01:17:57.624547  535383 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0730 01:17:57.624553  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624557  535383 command_runner.go:130] >       "size": "61245718",
	I0730 01:17:57.624563  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.624567  535383 command_runner.go:130] >       "username": "nonroot",
	I0730 01:17:57.624571  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624577  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624580  535383 command_runner.go:130] >     },
	I0730 01:17:57.624585  535383 command_runner.go:130] >     {
	I0730 01:17:57.624591  535383 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0730 01:17:57.624597  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624602  535383 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0730 01:17:57.624607  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624611  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624620  535383 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0730 01:17:57.624629  535383 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0730 01:17:57.624634  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624638  535383 command_runner.go:130] >       "size": "150779692",
	I0730 01:17:57.624644  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.624648  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.624655  535383 command_runner.go:130] >       },
	I0730 01:17:57.624661  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624665  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624669  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624674  535383 command_runner.go:130] >     },
	I0730 01:17:57.624677  535383 command_runner.go:130] >     {
	I0730 01:17:57.624683  535383 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0730 01:17:57.624689  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624694  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0730 01:17:57.624700  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624714  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624729  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0730 01:17:57.624750  535383 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0730 01:17:57.624757  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624761  535383 command_runner.go:130] >       "size": "117609954",
	I0730 01:17:57.624767  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.624770  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.624778  535383 command_runner.go:130] >       },
	I0730 01:17:57.624787  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624796  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624805  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624813  535383 command_runner.go:130] >     },
	I0730 01:17:57.624817  535383 command_runner.go:130] >     {
	I0730 01:17:57.624828  535383 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0730 01:17:57.624836  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.624847  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0730 01:17:57.624854  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624861  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.624894  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0730 01:17:57.624911  535383 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0730 01:17:57.624916  535383 command_runner.go:130] >       ],
	I0730 01:17:57.624925  535383 command_runner.go:130] >       "size": "112198984",
	I0730 01:17:57.624933  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.624941  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.624946  535383 command_runner.go:130] >       },
	I0730 01:17:57.624953  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.624959  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.624968  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.624973  535383 command_runner.go:130] >     },
	I0730 01:17:57.624980  535383 command_runner.go:130] >     {
	I0730 01:17:57.624990  535383 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0730 01:17:57.624998  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.625007  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0730 01:17:57.625015  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625022  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.625036  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0730 01:17:57.625056  535383 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0730 01:17:57.625061  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625079  535383 command_runner.go:130] >       "size": "85953945",
	I0730 01:17:57.625089  535383 command_runner.go:130] >       "uid": null,
	I0730 01:17:57.625095  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.625103  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.625109  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.625117  535383 command_runner.go:130] >     },
	I0730 01:17:57.625122  535383 command_runner.go:130] >     {
	I0730 01:17:57.625135  535383 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0730 01:17:57.625143  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.625153  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0730 01:17:57.625161  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625167  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.625181  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0730 01:17:57.625195  535383 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0730 01:17:57.625204  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625212  535383 command_runner.go:130] >       "size": "63051080",
	I0730 01:17:57.625220  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.625230  535383 command_runner.go:130] >         "value": "0"
	I0730 01:17:57.625235  535383 command_runner.go:130] >       },
	I0730 01:17:57.625245  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.625254  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.625263  535383 command_runner.go:130] >       "pinned": false
	I0730 01:17:57.625272  535383 command_runner.go:130] >     },
	I0730 01:17:57.625279  535383 command_runner.go:130] >     {
	I0730 01:17:57.625286  535383 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0730 01:17:57.625292  535383 command_runner.go:130] >       "repoTags": [
	I0730 01:17:57.625297  535383 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0730 01:17:57.625302  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625307  535383 command_runner.go:130] >       "repoDigests": [
	I0730 01:17:57.625321  535383 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0730 01:17:57.625330  535383 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0730 01:17:57.625335  535383 command_runner.go:130] >       ],
	I0730 01:17:57.625340  535383 command_runner.go:130] >       "size": "750414",
	I0730 01:17:57.625345  535383 command_runner.go:130] >       "uid": {
	I0730 01:17:57.625350  535383 command_runner.go:130] >         "value": "65535"
	I0730 01:17:57.625356  535383 command_runner.go:130] >       },
	I0730 01:17:57.625367  535383 command_runner.go:130] >       "username": "",
	I0730 01:17:57.625374  535383 command_runner.go:130] >       "spec": null,
	I0730 01:17:57.625378  535383 command_runner.go:130] >       "pinned": true
	I0730 01:17:57.625383  535383 command_runner.go:130] >     }
	I0730 01:17:57.625386  535383 command_runner.go:130] >   ]
	I0730 01:17:57.625392  535383 command_runner.go:130] > }
	I0730 01:17:57.625525  535383 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 01:17:57.625537  535383 cache_images.go:84] Images are preloaded, skipping loading
	I0730 01:17:57.625544  535383 kubeadm.go:934] updating node { 192.168.39.235 8443 v1.30.3 crio true true} ...
	I0730 01:17:57.625658  535383 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-543365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 01:17:57.625727  535383 ssh_runner.go:195] Run: crio config
	I0730 01:17:57.657701  535383 command_runner.go:130] ! time="2024-07-30 01:17:57.635208127Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0730 01:17:57.663983  535383 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0730 01:17:57.671475  535383 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0730 01:17:57.671499  535383 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0730 01:17:57.671505  535383 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0730 01:17:57.671509  535383 command_runner.go:130] > #
	I0730 01:17:57.671515  535383 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0730 01:17:57.671521  535383 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0730 01:17:57.671527  535383 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0730 01:17:57.671534  535383 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0730 01:17:57.671537  535383 command_runner.go:130] > # reload'.
	I0730 01:17:57.671543  535383 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0730 01:17:57.671548  535383 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0730 01:17:57.671554  535383 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0730 01:17:57.671562  535383 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0730 01:17:57.671571  535383 command_runner.go:130] > [crio]
	I0730 01:17:57.671580  535383 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0730 01:17:57.671587  535383 command_runner.go:130] > # containers images, in this directory.
	I0730 01:17:57.671597  535383 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0730 01:17:57.671607  535383 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0730 01:17:57.671615  535383 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0730 01:17:57.671622  535383 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0730 01:17:57.671626  535383 command_runner.go:130] > # imagestore = ""
	I0730 01:17:57.671632  535383 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0730 01:17:57.671642  535383 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0730 01:17:57.671646  535383 command_runner.go:130] > storage_driver = "overlay"
	I0730 01:17:57.671659  535383 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0730 01:17:57.671672  535383 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0730 01:17:57.671686  535383 command_runner.go:130] > storage_option = [
	I0730 01:17:57.671693  535383 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0730 01:17:57.671697  535383 command_runner.go:130] > ]
	I0730 01:17:57.671705  535383 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0730 01:17:57.671714  535383 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0730 01:17:57.671720  535383 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0730 01:17:57.671726  535383 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0730 01:17:57.671733  535383 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0730 01:17:57.671740  535383 command_runner.go:130] > # always happen on a node reboot
	I0730 01:17:57.671751  535383 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0730 01:17:57.671769  535383 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0730 01:17:57.671781  535383 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0730 01:17:57.671788  535383 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0730 01:17:57.671793  535383 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0730 01:17:57.671802  535383 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0730 01:17:57.671811  535383 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0730 01:17:57.671817  535383 command_runner.go:130] > # internal_wipe = true
	I0730 01:17:57.671828  535383 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0730 01:17:57.671840  535383 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0730 01:17:57.671850  535383 command_runner.go:130] > # internal_repair = false
	I0730 01:17:57.671861  535383 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0730 01:17:57.671874  535383 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0730 01:17:57.671884  535383 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0730 01:17:57.671891  535383 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0730 01:17:57.671897  535383 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0730 01:17:57.671902  535383 command_runner.go:130] > [crio.api]
	I0730 01:17:57.671910  535383 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0730 01:17:57.671920  535383 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0730 01:17:57.671931  535383 command_runner.go:130] > # IP address on which the stream server will listen.
	I0730 01:17:57.671941  535383 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0730 01:17:57.671954  535383 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0730 01:17:57.671964  535383 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0730 01:17:57.671973  535383 command_runner.go:130] > # stream_port = "0"
	I0730 01:17:57.671982  535383 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0730 01:17:57.671990  535383 command_runner.go:130] > # stream_enable_tls = false
	I0730 01:17:57.671999  535383 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0730 01:17:57.672010  535383 command_runner.go:130] > # stream_idle_timeout = ""
	I0730 01:17:57.672025  535383 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0730 01:17:57.672038  535383 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0730 01:17:57.672047  535383 command_runner.go:130] > # minutes.
	I0730 01:17:57.672057  535383 command_runner.go:130] > # stream_tls_cert = ""
	I0730 01:17:57.672067  535383 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0730 01:17:57.672073  535383 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0730 01:17:57.672087  535383 command_runner.go:130] > # stream_tls_key = ""
	I0730 01:17:57.672101  535383 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0730 01:17:57.672113  535383 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0730 01:17:57.672132  535383 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0730 01:17:57.672140  535383 command_runner.go:130] > # stream_tls_ca = ""
	I0730 01:17:57.672151  535383 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0730 01:17:57.672158  535383 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0730 01:17:57.672168  535383 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0730 01:17:57.672179  535383 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0730 01:17:57.672192  535383 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0730 01:17:57.672203  535383 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0730 01:17:57.672212  535383 command_runner.go:130] > [crio.runtime]
	I0730 01:17:57.672224  535383 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0730 01:17:57.672235  535383 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0730 01:17:57.672241  535383 command_runner.go:130] > # "nofile=1024:2048"
	I0730 01:17:57.672249  535383 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0730 01:17:57.672258  535383 command_runner.go:130] > # default_ulimits = [
	I0730 01:17:57.672268  535383 command_runner.go:130] > # ]
	I0730 01:17:57.672280  535383 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0730 01:17:57.672289  535383 command_runner.go:130] > # no_pivot = false
	I0730 01:17:57.672301  535383 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0730 01:17:57.672313  535383 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0730 01:17:57.672321  535383 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0730 01:17:57.672329  535383 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0730 01:17:57.672337  535383 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0730 01:17:57.672350  535383 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0730 01:17:57.672360  535383 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0730 01:17:57.672371  535383 command_runner.go:130] > # Cgroup setting for conmon
	I0730 01:17:57.672384  535383 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0730 01:17:57.672392  535383 command_runner.go:130] > conmon_cgroup = "pod"
	I0730 01:17:57.672404  535383 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0730 01:17:57.672411  535383 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0730 01:17:57.672430  535383 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0730 01:17:57.672440  535383 command_runner.go:130] > conmon_env = [
	I0730 01:17:57.672449  535383 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0730 01:17:57.672457  535383 command_runner.go:130] > ]
	I0730 01:17:57.672469  535383 command_runner.go:130] > # Additional environment variables to set for all the
	I0730 01:17:57.672480  535383 command_runner.go:130] > # containers. These are overridden if set in the
	I0730 01:17:57.672490  535383 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0730 01:17:57.672496  535383 command_runner.go:130] > # default_env = [
	I0730 01:17:57.672500  535383 command_runner.go:130] > # ]
	I0730 01:17:57.672509  535383 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0730 01:17:57.672524  535383 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0730 01:17:57.672533  535383 command_runner.go:130] > # selinux = false
	I0730 01:17:57.672546  535383 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0730 01:17:57.672559  535383 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0730 01:17:57.672572  535383 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0730 01:17:57.672578  535383 command_runner.go:130] > # seccomp_profile = ""
	I0730 01:17:57.672584  535383 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0730 01:17:57.672596  535383 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0730 01:17:57.672609  535383 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0730 01:17:57.672620  535383 command_runner.go:130] > # which might increase security.
	I0730 01:17:57.672630  535383 command_runner.go:130] > # This option is currently deprecated,
	I0730 01:17:57.672641  535383 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0730 01:17:57.672652  535383 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0730 01:17:57.672662  535383 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0730 01:17:57.672672  535383 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0730 01:17:57.672688  535383 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0730 01:17:57.672700  535383 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0730 01:17:57.672721  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.672731  535383 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0730 01:17:57.672741  535383 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0730 01:17:57.672751  535383 command_runner.go:130] > # the cgroup blockio controller.
	I0730 01:17:57.672762  535383 command_runner.go:130] > # blockio_config_file = ""
	I0730 01:17:57.672774  535383 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0730 01:17:57.672780  535383 command_runner.go:130] > # blockio parameters.
	I0730 01:17:57.672786  535383 command_runner.go:130] > # blockio_reload = false
	I0730 01:17:57.672798  535383 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0730 01:17:57.672808  535383 command_runner.go:130] > # irqbalance daemon.
	I0730 01:17:57.672820  535383 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0730 01:17:57.672834  535383 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0730 01:17:57.672848  535383 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0730 01:17:57.672860  535383 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0730 01:17:57.672868  535383 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0730 01:17:57.672881  535383 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0730 01:17:57.672893  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.672902  535383 command_runner.go:130] > # rdt_config_file = ""
	I0730 01:17:57.672913  535383 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0730 01:17:57.672922  535383 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0730 01:17:57.672946  535383 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0730 01:17:57.672953  535383 command_runner.go:130] > # separate_pull_cgroup = ""
	I0730 01:17:57.672965  535383 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0730 01:17:57.672978  535383 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0730 01:17:57.672987  535383 command_runner.go:130] > # will be added.
	I0730 01:17:57.672997  535383 command_runner.go:130] > # default_capabilities = [
	I0730 01:17:57.673005  535383 command_runner.go:130] > # 	"CHOWN",
	I0730 01:17:57.673013  535383 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0730 01:17:57.673021  535383 command_runner.go:130] > # 	"FSETID",
	I0730 01:17:57.673029  535383 command_runner.go:130] > # 	"FOWNER",
	I0730 01:17:57.673032  535383 command_runner.go:130] > # 	"SETGID",
	I0730 01:17:57.673039  535383 command_runner.go:130] > # 	"SETUID",
	I0730 01:17:57.673045  535383 command_runner.go:130] > # 	"SETPCAP",
	I0730 01:17:57.673055  535383 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0730 01:17:57.673064  535383 command_runner.go:130] > # 	"KILL",
	I0730 01:17:57.673069  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673088  535383 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0730 01:17:57.673101  535383 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0730 01:17:57.673110  535383 command_runner.go:130] > # add_inheritable_capabilities = false
	I0730 01:17:57.673117  535383 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0730 01:17:57.673127  535383 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0730 01:17:57.673136  535383 command_runner.go:130] > default_sysctls = [
	I0730 01:17:57.673144  535383 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0730 01:17:57.673151  535383 command_runner.go:130] > ]
	I0730 01:17:57.673159  535383 command_runner.go:130] > # List of devices on the host that a
	I0730 01:17:57.673172  535383 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0730 01:17:57.673181  535383 command_runner.go:130] > # allowed_devices = [
	I0730 01:17:57.673191  535383 command_runner.go:130] > # 	"/dev/fuse",
	I0730 01:17:57.673198  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673203  535383 command_runner.go:130] > # List of additional devices. specified as
	I0730 01:17:57.673216  535383 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0730 01:17:57.673228  535383 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0730 01:17:57.673243  535383 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0730 01:17:57.673253  535383 command_runner.go:130] > # additional_devices = [
	I0730 01:17:57.673261  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673272  535383 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0730 01:17:57.673281  535383 command_runner.go:130] > # cdi_spec_dirs = [
	I0730 01:17:57.673287  535383 command_runner.go:130] > # 	"/etc/cdi",
	I0730 01:17:57.673292  535383 command_runner.go:130] > # 	"/var/run/cdi",
	I0730 01:17:57.673300  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673313  535383 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0730 01:17:57.673325  535383 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0730 01:17:57.673334  535383 command_runner.go:130] > # Defaults to false.
	I0730 01:17:57.673344  535383 command_runner.go:130] > # device_ownership_from_security_context = false
	I0730 01:17:57.673357  535383 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0730 01:17:57.673367  535383 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0730 01:17:57.673373  535383 command_runner.go:130] > # hooks_dir = [
	I0730 01:17:57.673380  535383 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0730 01:17:57.673388  535383 command_runner.go:130] > # ]
	I0730 01:17:57.673402  535383 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0730 01:17:57.673414  535383 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0730 01:17:57.673425  535383 command_runner.go:130] > # its default mounts from the following two files:
	I0730 01:17:57.673434  535383 command_runner.go:130] > #
	I0730 01:17:57.673446  535383 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0730 01:17:57.673455  535383 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0730 01:17:57.673465  535383 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0730 01:17:57.673475  535383 command_runner.go:130] > #
	I0730 01:17:57.673486  535383 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0730 01:17:57.673498  535383 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0730 01:17:57.673510  535383 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0730 01:17:57.673520  535383 command_runner.go:130] > #      only add mounts it finds in this file.
	I0730 01:17:57.673528  535383 command_runner.go:130] > #
	I0730 01:17:57.673534  535383 command_runner.go:130] > # default_mounts_file = ""
	I0730 01:17:57.673541  535383 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0730 01:17:57.673549  535383 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0730 01:17:57.673559  535383 command_runner.go:130] > pids_limit = 1024
	I0730 01:17:57.673569  535383 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0730 01:17:57.673581  535383 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0730 01:17:57.673594  535383 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0730 01:17:57.673608  535383 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0730 01:17:57.673617  535383 command_runner.go:130] > # log_size_max = -1
	I0730 01:17:57.673626  535383 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0730 01:17:57.673638  535383 command_runner.go:130] > # log_to_journald = false
	I0730 01:17:57.673651  535383 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0730 01:17:57.673662  535383 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0730 01:17:57.673672  535383 command_runner.go:130] > # Path to directory for container attach sockets.
	I0730 01:17:57.673683  535383 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0730 01:17:57.673694  535383 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0730 01:17:57.673703  535383 command_runner.go:130] > # bind_mount_prefix = ""
	I0730 01:17:57.673710  535383 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0730 01:17:57.673715  535383 command_runner.go:130] > # read_only = false
	I0730 01:17:57.673727  535383 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0730 01:17:57.673740  535383 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0730 01:17:57.673749  535383 command_runner.go:130] > # live configuration reload.
	I0730 01:17:57.673757  535383 command_runner.go:130] > # log_level = "info"
	I0730 01:17:57.673768  535383 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0730 01:17:57.673779  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.673787  535383 command_runner.go:130] > # log_filter = ""
	I0730 01:17:57.673796  535383 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0730 01:17:57.673810  535383 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0730 01:17:57.673820  535383 command_runner.go:130] > # separated by comma.
	I0730 01:17:57.673831  535383 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0730 01:17:57.673842  535383 command_runner.go:130] > # uid_mappings = ""
	I0730 01:17:57.673854  535383 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0730 01:17:57.673866  535383 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0730 01:17:57.673875  535383 command_runner.go:130] > # separated by comma.
	I0730 01:17:57.673885  535383 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0730 01:17:57.673893  535383 command_runner.go:130] > # gid_mappings = ""
	I0730 01:17:57.673906  535383 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0730 01:17:57.673919  535383 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0730 01:17:57.673931  535383 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0730 01:17:57.673946  535383 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0730 01:17:57.673955  535383 command_runner.go:130] > # minimum_mappable_uid = -1
	I0730 01:17:57.673965  535383 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0730 01:17:57.673974  535383 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0730 01:17:57.673986  535383 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0730 01:17:57.674001  535383 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0730 01:17:57.674013  535383 command_runner.go:130] > # minimum_mappable_gid = -1
	I0730 01:17:57.674025  535383 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0730 01:17:57.674037  535383 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0730 01:17:57.674048  535383 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0730 01:17:57.674054  535383 command_runner.go:130] > # ctr_stop_timeout = 30
	I0730 01:17:57.674062  535383 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0730 01:17:57.674074  535383 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0730 01:17:57.674086  535383 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0730 01:17:57.674097  535383 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0730 01:17:57.674106  535383 command_runner.go:130] > drop_infra_ctr = false
	I0730 01:17:57.674116  535383 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0730 01:17:57.674127  535383 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0730 01:17:57.674138  535383 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0730 01:17:57.674146  535383 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0730 01:17:57.674157  535383 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0730 01:17:57.674170  535383 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0730 01:17:57.674181  535383 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0730 01:17:57.674192  535383 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0730 01:17:57.674202  535383 command_runner.go:130] > # shared_cpuset = ""
	I0730 01:17:57.674214  535383 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0730 01:17:57.674222  535383 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0730 01:17:57.674230  535383 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0730 01:17:57.674244  535383 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0730 01:17:57.674254  535383 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0730 01:17:57.674262  535383 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0730 01:17:57.674275  535383 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0730 01:17:57.674284  535383 command_runner.go:130] > # enable_criu_support = false
	I0730 01:17:57.674294  535383 command_runner.go:130] > # Enable/disable the generation of the container,
	I0730 01:17:57.674304  535383 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0730 01:17:57.674310  535383 command_runner.go:130] > # enable_pod_events = false
	I0730 01:17:57.674320  535383 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0730 01:17:57.674344  535383 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0730 01:17:57.674354  535383 command_runner.go:130] > # default_runtime = "runc"
	I0730 01:17:57.674365  535383 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0730 01:17:57.674379  535383 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0730 01:17:57.674392  535383 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0730 01:17:57.674405  535383 command_runner.go:130] > # creation as a file is not desired either.
	I0730 01:17:57.674421  535383 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0730 01:17:57.674432  535383 command_runner.go:130] > # the hostname is being managed dynamically.
	I0730 01:17:57.674441  535383 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0730 01:17:57.674450  535383 command_runner.go:130] > # ]
	I0730 01:17:57.674462  535383 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0730 01:17:57.674473  535383 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0730 01:17:57.674482  535383 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0730 01:17:57.674492  535383 command_runner.go:130] > # Each entry in the table should follow the format:
	I0730 01:17:57.674501  535383 command_runner.go:130] > #
	I0730 01:17:57.674512  535383 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0730 01:17:57.674522  535383 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0730 01:17:57.674559  535383 command_runner.go:130] > # runtime_type = "oci"
	I0730 01:17:57.674569  535383 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0730 01:17:57.674576  535383 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0730 01:17:57.674587  535383 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0730 01:17:57.674597  535383 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0730 01:17:57.674606  535383 command_runner.go:130] > # monitor_env = []
	I0730 01:17:57.674616  535383 command_runner.go:130] > # privileged_without_host_devices = false
	I0730 01:17:57.674626  535383 command_runner.go:130] > # allowed_annotations = []
	I0730 01:17:57.674639  535383 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0730 01:17:57.674645  535383 command_runner.go:130] > # Where:
	I0730 01:17:57.674652  535383 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0730 01:17:57.674665  535383 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0730 01:17:57.674678  535383 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0730 01:17:57.674690  535383 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0730 01:17:57.674698  535383 command_runner.go:130] > #   in $PATH.
	I0730 01:17:57.674708  535383 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0730 01:17:57.674719  535383 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0730 01:17:57.674728  535383 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0730 01:17:57.674734  535383 command_runner.go:130] > #   state.
	I0730 01:17:57.674744  535383 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0730 01:17:57.674757  535383 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0730 01:17:57.674771  535383 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0730 01:17:57.674783  535383 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0730 01:17:57.674795  535383 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0730 01:17:57.674808  535383 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0730 01:17:57.674818  535383 command_runner.go:130] > #   The currently recognized values are:
	I0730 01:17:57.674827  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0730 01:17:57.674842  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0730 01:17:57.674853  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0730 01:17:57.674866  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0730 01:17:57.674880  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0730 01:17:57.674893  535383 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0730 01:17:57.674902  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0730 01:17:57.674911  535383 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0730 01:17:57.674924  535383 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0730 01:17:57.674937  535383 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0730 01:17:57.674947  535383 command_runner.go:130] > #   deprecated option "conmon".
	I0730 01:17:57.674960  535383 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0730 01:17:57.674971  535383 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0730 01:17:57.674981  535383 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0730 01:17:57.674989  535383 command_runner.go:130] > #   should be moved to the container's cgroup
	I0730 01:17:57.674999  535383 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0730 01:17:57.675010  535383 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0730 01:17:57.675024  535383 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0730 01:17:57.675036  535383 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0730 01:17:57.675043  535383 command_runner.go:130] > #
	I0730 01:17:57.675054  535383 command_runner.go:130] > # Using the seccomp notifier feature:
	I0730 01:17:57.675061  535383 command_runner.go:130] > #
	I0730 01:17:57.675067  535383 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0730 01:17:57.675078  535383 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0730 01:17:57.675089  535383 command_runner.go:130] > #
	I0730 01:17:57.675101  535383 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0730 01:17:57.675114  535383 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0730 01:17:57.675121  535383 command_runner.go:130] > #
	I0730 01:17:57.675131  535383 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0730 01:17:57.675139  535383 command_runner.go:130] > # feature.
	I0730 01:17:57.675145  535383 command_runner.go:130] > #
	I0730 01:17:57.675154  535383 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0730 01:17:57.675162  535383 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0730 01:17:57.675175  535383 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0730 01:17:57.675190  535383 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0730 01:17:57.675203  535383 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0730 01:17:57.675211  535383 command_runner.go:130] > #
	I0730 01:17:57.675223  535383 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0730 01:17:57.675234  535383 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0730 01:17:57.675239  535383 command_runner.go:130] > #
	I0730 01:17:57.675248  535383 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0730 01:17:57.675260  535383 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0730 01:17:57.675268  535383 command_runner.go:130] > #
	I0730 01:17:57.675277  535383 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0730 01:17:57.675290  535383 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0730 01:17:57.675299  535383 command_runner.go:130] > # limitation.
	I0730 01:17:57.675309  535383 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0730 01:17:57.675317  535383 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0730 01:17:57.675324  535383 command_runner.go:130] > runtime_type = "oci"
	I0730 01:17:57.675329  535383 command_runner.go:130] > runtime_root = "/run/runc"
	I0730 01:17:57.675336  535383 command_runner.go:130] > runtime_config_path = ""
	I0730 01:17:57.675347  535383 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0730 01:17:57.675353  535383 command_runner.go:130] > monitor_cgroup = "pod"
	I0730 01:17:57.675363  535383 command_runner.go:130] > monitor_exec_cgroup = ""
	I0730 01:17:57.675373  535383 command_runner.go:130] > monitor_env = [
	I0730 01:17:57.675385  535383 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0730 01:17:57.675392  535383 command_runner.go:130] > ]
	I0730 01:17:57.675400  535383 command_runner.go:130] > privileged_without_host_devices = false
	I0730 01:17:57.675408  535383 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0730 01:17:57.675418  535383 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0730 01:17:57.675432  535383 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0730 01:17:57.675446  535383 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0730 01:17:57.675462  535383 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0730 01:17:57.675473  535383 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0730 01:17:57.675488  535383 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0730 01:17:57.675499  535383 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0730 01:17:57.675509  535383 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0730 01:17:57.675520  535383 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0730 01:17:57.675526  535383 command_runner.go:130] > # Example:
	I0730 01:17:57.675533  535383 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0730 01:17:57.675541  535383 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0730 01:17:57.675552  535383 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0730 01:17:57.675559  535383 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0730 01:17:57.675565  535383 command_runner.go:130] > # cpuset = 0
	I0730 01:17:57.675571  535383 command_runner.go:130] > # cpushares = "0-1"
	I0730 01:17:57.675574  535383 command_runner.go:130] > # Where:
	I0730 01:17:57.675578  535383 command_runner.go:130] > # The workload name is workload-type.
	I0730 01:17:57.675586  535383 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0730 01:17:57.675595  535383 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0730 01:17:57.675605  535383 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0730 01:17:57.675617  535383 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0730 01:17:57.675625  535383 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0730 01:17:57.675634  535383 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0730 01:17:57.675643  535383 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0730 01:17:57.675650  535383 command_runner.go:130] > # Default value is set to true
	I0730 01:17:57.675657  535383 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0730 01:17:57.675662  535383 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0730 01:17:57.675666  535383 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0730 01:17:57.675670  535383 command_runner.go:130] > # Default value is set to 'false'
	I0730 01:17:57.675674  535383 command_runner.go:130] > # disable_hostport_mapping = false
	I0730 01:17:57.675681  535383 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0730 01:17:57.675683  535383 command_runner.go:130] > #
	I0730 01:17:57.675691  535383 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0730 01:17:57.675701  535383 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0730 01:17:57.675714  535383 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0730 01:17:57.675726  535383 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0730 01:17:57.675738  535383 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0730 01:17:57.675746  535383 command_runner.go:130] > [crio.image]
	I0730 01:17:57.675758  535383 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0730 01:17:57.675765  535383 command_runner.go:130] > # default_transport = "docker://"
	I0730 01:17:57.675771  535383 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0730 01:17:57.675781  535383 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0730 01:17:57.675787  535383 command_runner.go:130] > # global_auth_file = ""
	I0730 01:17:57.675792  535383 command_runner.go:130] > # The image used to instantiate infra containers.
	I0730 01:17:57.675798  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.675803  535383 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0730 01:17:57.675811  535383 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0730 01:17:57.675818  535383 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0730 01:17:57.675825  535383 command_runner.go:130] > # This option supports live configuration reload.
	I0730 01:17:57.675833  535383 command_runner.go:130] > # pause_image_auth_file = ""
	I0730 01:17:57.675845  535383 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0730 01:17:57.675858  535383 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0730 01:17:57.675871  535383 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0730 01:17:57.675883  535383 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0730 01:17:57.675892  535383 command_runner.go:130] > # pause_command = "/pause"
	I0730 01:17:57.675902  535383 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0730 01:17:57.675911  535383 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0730 01:17:57.675919  535383 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0730 01:17:57.675927  535383 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0730 01:17:57.675934  535383 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0730 01:17:57.675940  535383 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0730 01:17:57.675946  535383 command_runner.go:130] > # pinned_images = [
	I0730 01:17:57.675949  535383 command_runner.go:130] > # ]
	I0730 01:17:57.675957  535383 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0730 01:17:57.675964  535383 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0730 01:17:57.675972  535383 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0730 01:17:57.675981  535383 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0730 01:17:57.675988  535383 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0730 01:17:57.675992  535383 command_runner.go:130] > # signature_policy = ""
	I0730 01:17:57.676000  535383 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0730 01:17:57.676006  535383 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0730 01:17:57.676014  535383 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0730 01:17:57.676022  535383 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0730 01:17:57.676029  535383 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0730 01:17:57.676034  535383 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0730 01:17:57.676041  535383 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0730 01:17:57.676049  535383 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0730 01:17:57.676053  535383 command_runner.go:130] > # changing them here.
	I0730 01:17:57.676062  535383 command_runner.go:130] > # insecure_registries = [
	I0730 01:17:57.676067  535383 command_runner.go:130] > # ]
	I0730 01:17:57.676078  535383 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0730 01:17:57.676089  535383 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0730 01:17:57.676092  535383 command_runner.go:130] > # image_volumes = "mkdir"
	I0730 01:17:57.676099  535383 command_runner.go:130] > # Temporary directory to use for storing big files
	I0730 01:17:57.676103  535383 command_runner.go:130] > # big_files_temporary_dir = ""
	I0730 01:17:57.676112  535383 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0730 01:17:57.676118  535383 command_runner.go:130] > # CNI plugins.
	I0730 01:17:57.676121  535383 command_runner.go:130] > [crio.network]
	I0730 01:17:57.676127  535383 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0730 01:17:57.676134  535383 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0730 01:17:57.676138  535383 command_runner.go:130] > # cni_default_network = ""
	I0730 01:17:57.676143  535383 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0730 01:17:57.676149  535383 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0730 01:17:57.676154  535383 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0730 01:17:57.676158  535383 command_runner.go:130] > # plugin_dirs = [
	I0730 01:17:57.676163  535383 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0730 01:17:57.676166  535383 command_runner.go:130] > # ]
	I0730 01:17:57.676172  535383 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0730 01:17:57.676177  535383 command_runner.go:130] > [crio.metrics]
	I0730 01:17:57.676182  535383 command_runner.go:130] > # Globally enable or disable metrics support.
	I0730 01:17:57.676188  535383 command_runner.go:130] > enable_metrics = true
	I0730 01:17:57.676192  535383 command_runner.go:130] > # Specify enabled metrics collectors.
	I0730 01:17:57.676200  535383 command_runner.go:130] > # Per default all metrics are enabled.
	I0730 01:17:57.676207  535383 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0730 01:17:57.676215  535383 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0730 01:17:57.676221  535383 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0730 01:17:57.676225  535383 command_runner.go:130] > # metrics_collectors = [
	I0730 01:17:57.676230  535383 command_runner.go:130] > # 	"operations",
	I0730 01:17:57.676235  535383 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0730 01:17:57.676241  535383 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0730 01:17:57.676245  535383 command_runner.go:130] > # 	"operations_errors",
	I0730 01:17:57.676251  535383 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0730 01:17:57.676255  535383 command_runner.go:130] > # 	"image_pulls_by_name",
	I0730 01:17:57.676261  535383 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0730 01:17:57.676265  535383 command_runner.go:130] > # 	"image_pulls_failures",
	I0730 01:17:57.676272  535383 command_runner.go:130] > # 	"image_pulls_successes",
	I0730 01:17:57.676276  535383 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0730 01:17:57.676282  535383 command_runner.go:130] > # 	"image_layer_reuse",
	I0730 01:17:57.676287  535383 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0730 01:17:57.676293  535383 command_runner.go:130] > # 	"containers_oom_total",
	I0730 01:17:57.676298  535383 command_runner.go:130] > # 	"containers_oom",
	I0730 01:17:57.676304  535383 command_runner.go:130] > # 	"processes_defunct",
	I0730 01:17:57.676308  535383 command_runner.go:130] > # 	"operations_total",
	I0730 01:17:57.676312  535383 command_runner.go:130] > # 	"operations_latency_seconds",
	I0730 01:17:57.676318  535383 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0730 01:17:57.676323  535383 command_runner.go:130] > # 	"operations_errors_total",
	I0730 01:17:57.676329  535383 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0730 01:17:57.676335  535383 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0730 01:17:57.676341  535383 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0730 01:17:57.676346  535383 command_runner.go:130] > # 	"image_pulls_success_total",
	I0730 01:17:57.676354  535383 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0730 01:17:57.676361  535383 command_runner.go:130] > # 	"containers_oom_count_total",
	I0730 01:17:57.676365  535383 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0730 01:17:57.676373  535383 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0730 01:17:57.676376  535383 command_runner.go:130] > # ]
	I0730 01:17:57.676382  535383 command_runner.go:130] > # The port on which the metrics server will listen.
	I0730 01:17:57.676386  535383 command_runner.go:130] > # metrics_port = 9090
	I0730 01:17:57.676393  535383 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0730 01:17:57.676398  535383 command_runner.go:130] > # metrics_socket = ""
	I0730 01:17:57.676405  535383 command_runner.go:130] > # The certificate for the secure metrics server.
	I0730 01:17:57.676413  535383 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0730 01:17:57.676421  535383 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0730 01:17:57.676428  535383 command_runner.go:130] > # certificate on any modification event.
	I0730 01:17:57.676432  535383 command_runner.go:130] > # metrics_cert = ""
	I0730 01:17:57.676438  535383 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0730 01:17:57.676445  535383 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0730 01:17:57.676454  535383 command_runner.go:130] > # metrics_key = ""
	I0730 01:17:57.676465  535383 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0730 01:17:57.676472  535383 command_runner.go:130] > [crio.tracing]
	I0730 01:17:57.676477  535383 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0730 01:17:57.676483  535383 command_runner.go:130] > # enable_tracing = false
	I0730 01:17:57.676488  535383 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0730 01:17:57.676495  535383 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0730 01:17:57.676504  535383 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0730 01:17:57.676511  535383 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0730 01:17:57.676515  535383 command_runner.go:130] > # CRI-O NRI configuration.
	I0730 01:17:57.676521  535383 command_runner.go:130] > [crio.nri]
	I0730 01:17:57.676525  535383 command_runner.go:130] > # Globally enable or disable NRI.
	I0730 01:17:57.676531  535383 command_runner.go:130] > # enable_nri = false
	I0730 01:17:57.676535  535383 command_runner.go:130] > # NRI socket to listen on.
	I0730 01:17:57.676542  535383 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0730 01:17:57.676546  535383 command_runner.go:130] > # NRI plugin directory to use.
	I0730 01:17:57.676551  535383 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0730 01:17:57.676558  535383 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0730 01:17:57.676563  535383 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0730 01:17:57.676571  535383 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0730 01:17:57.676577  535383 command_runner.go:130] > # nri_disable_connections = false
	I0730 01:17:57.676582  535383 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0730 01:17:57.676589  535383 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0730 01:17:57.676593  535383 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0730 01:17:57.676600  535383 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0730 01:17:57.676606  535383 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0730 01:17:57.676611  535383 command_runner.go:130] > [crio.stats]
	I0730 01:17:57.676620  535383 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0730 01:17:57.676628  535383 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0730 01:17:57.676632  535383 command_runner.go:130] > # stats_collection_period = 0
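	The block above is the complete CRI-O configuration the provisioner read back from the node; the uncommented keys (pids_limit = 1024, drop_infra_ctr = false, pinns_path, enable_metrics = true and the [crio.runtime.runtimes.runc] table) are the values minikube sets away from the defaults. To spot-check what CRI-O actually resolved, the rendered configuration can be re-printed with the crio binary itself; a minimal sketch, assuming crio is on PATH inside the VM (reached via minikube ssh) and that the grep keys are purely illustrative:
	
	    # Re-render the configuration CRI-O would use and check the overridden keys.
	    sudo crio config 2>/dev/null | grep -E '^(pids_limit|drop_infra_ctr|pinns_path|enable_metrics)'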
	I0730 01:17:57.676766  535383 cni.go:84] Creating CNI manager for ""
	I0730 01:17:57.676777  535383 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 01:17:57.676785  535383 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 01:17:57.676808  535383 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-543365 NodeName:multinode-543365 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 01:17:57.676950  535383 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-543365"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
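	The YAML above is the generated kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. If a config like this needs checking by hand, it can be validated against the kubeadm API schema; a minimal sketch, assuming a recent kubeadm release that ships the "config validate" subcommand and using the same file path as the scp step below:
	
	    # Validate the generated config against the kubeadm API schema.
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new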
	
	I0730 01:17:57.677029  535383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 01:17:57.686783  535383 command_runner.go:130] > kubeadm
	I0730 01:17:57.686802  535383 command_runner.go:130] > kubectl
	I0730 01:17:57.686808  535383 command_runner.go:130] > kubelet
	I0730 01:17:57.686824  535383 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 01:17:57.686885  535383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 01:17:57.696189  535383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0730 01:17:57.712541  535383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 01:17:57.728618  535383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0730 01:17:57.744676  535383 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I0730 01:17:57.748140  535383 command_runner.go:130] > 192.168.39.235	control-plane.minikube.internal
	I0730 01:17:57.748266  535383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:17:57.877568  535383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 01:17:57.892734  535383 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365 for IP: 192.168.39.235
	I0730 01:17:57.892762  535383 certs.go:194] generating shared ca certs ...
	I0730 01:17:57.892783  535383 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:17:57.892972  535383 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 01:17:57.893017  535383 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 01:17:57.893028  535383 certs.go:256] generating profile certs ...
	I0730 01:17:57.893105  535383 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/client.key
	I0730 01:17:57.893157  535383 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.key.a9fe4432
	I0730 01:17:57.893191  535383 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.key
	I0730 01:17:57.893202  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 01:17:57.893214  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 01:17:57.893223  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 01:17:57.893236  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 01:17:57.893248  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 01:17:57.893263  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 01:17:57.893275  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 01:17:57.893288  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 01:17:57.893357  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 01:17:57.893385  535383 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 01:17:57.893395  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 01:17:57.893420  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 01:17:57.893444  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 01:17:57.893465  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 01:17:57.893503  535383 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:17:57.893530  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem -> /usr/share/ca-certificates/502384.pem
	I0730 01:17:57.893543  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> /usr/share/ca-certificates/5023842.pem
	I0730 01:17:57.893556  535383 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:57.894194  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 01:17:57.916198  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 01:17:57.938321  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 01:17:57.960108  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 01:17:57.981855  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0730 01:17:58.004475  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 01:17:58.026459  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 01:17:58.048180  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/multinode-543365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0730 01:17:58.070122  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 01:17:58.091655  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 01:17:58.113952  535383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 01:17:58.136529  535383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 01:17:58.152945  535383 ssh_runner.go:195] Run: openssl version
	I0730 01:17:58.158550  535383 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0730 01:17:58.158625  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 01:17:58.168627  535383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 01:17:58.172630  535383 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 01:17:58.172758  535383 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 01:17:58.172803  535383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 01:17:58.177789  535383 command_runner.go:130] > 3ec20f2e
	I0730 01:17:58.178012  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 01:17:58.186612  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 01:17:58.196595  535383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:58.200620  535383 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:58.200668  535383 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:58.200725  535383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:17:58.205720  535383 command_runner.go:130] > b5213941
	I0730 01:17:58.205805  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 01:17:58.214199  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 01:17:58.224332  535383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 01:17:58.246721  535383 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 01:17:58.246913  535383 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 01:17:58.246982  535383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 01:17:58.253796  535383 command_runner.go:130] > 51391683
	I0730 01:17:58.253895  535383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
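	The three blocks above follow OpenSSL's hashed-directory convention: each CA certificate is placed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs as <subject-hash>.0, so OpenSSL can locate it by the hash that "openssl x509 -hash -noout" prints. A minimal sketch of the same pattern, with a placeholder certificate path rather than one from this run:
	
	    CERT=/usr/share/ca-certificates/example.pem       # placeholder path
	    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints e.g. 3ec20f2e
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # hashed name OpenSSL searches for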
	I0730 01:17:58.287608  535383 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 01:17:58.298821  535383 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 01:17:58.298853  535383 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0730 01:17:58.298859  535383 command_runner.go:130] > Device: 253,1	Inode: 6292011     Links: 1
	I0730 01:17:58.298866  535383 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0730 01:17:58.298872  535383 command_runner.go:130] > Access: 2024-07-30 01:11:07.593805320 +0000
	I0730 01:17:58.298881  535383 command_runner.go:130] > Modify: 2024-07-30 01:11:07.593805320 +0000
	I0730 01:17:58.298886  535383 command_runner.go:130] > Change: 2024-07-30 01:11:07.593805320 +0000
	I0730 01:17:58.298891  535383 command_runner.go:130] >  Birth: 2024-07-30 01:11:07.593805320 +0000
	I0730 01:17:58.298957  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 01:17:58.306173  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.306333  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 01:17:58.314432  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.314517  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 01:17:58.325378  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.327747  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 01:17:58.334440  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.334522  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 01:17:58.340753  535383 command_runner.go:130] > Certificate will not expire
	I0730 01:17:58.340938  535383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0730 01:17:58.346103  535383 command_runner.go:130] > Certificate will not expire
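	Each of the "-checkend 86400" runs above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; OpenSSL prints "Certificate will not expire" and exits 0 when it will, which is why the provisioner skips regenerating these certs. A minimal sketch of the same check, with the apiserver cert path used only as an example:
	
	    # Exit status 0: still valid 24 hours from now; non-zero: expiring or already expired.
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "certificate valid for at least another day"
	    else
	        echo "certificate expires within 24h"
	    fi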
	I0730 01:17:58.346192  535383 kubeadm.go:392] StartCluster: {Name:multinode-543365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-543365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:17:58.346406  535383 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 01:17:58.346504  535383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 01:17:58.404129  535383 command_runner.go:130] > 8612bcafcf544a74843e25073d4a30f22fcd4568b61f9e31feffa3f1ab4a2e10
	I0730 01:17:58.404165  535383 command_runner.go:130] > 6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea
	I0730 01:17:58.404176  535383 command_runner.go:130] > 14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b
	I0730 01:17:58.404185  535383 command_runner.go:130] > e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068
	I0730 01:17:58.404194  535383 command_runner.go:130] > 0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5
	I0730 01:17:58.404202  535383 command_runner.go:130] > 0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce
	I0730 01:17:58.404210  535383 command_runner.go:130] > 1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3
	I0730 01:17:58.404221  535383 command_runner.go:130] > c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08
	I0730 01:17:58.404253  535383 cri.go:89] found id: "8612bcafcf544a74843e25073d4a30f22fcd4568b61f9e31feffa3f1ab4a2e10"
	I0730 01:17:58.404263  535383 cri.go:89] found id: "6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea"
	I0730 01:17:58.404270  535383 cri.go:89] found id: "14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b"
	I0730 01:17:58.404275  535383 cri.go:89] found id: "e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068"
	I0730 01:17:58.404280  535383 cri.go:89] found id: "0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5"
	I0730 01:17:58.404286  535383 cri.go:89] found id: "0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce"
	I0730 01:17:58.404290  535383 cri.go:89] found id: "1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3"
	I0730 01:17:58.404295  535383 cri.go:89] found id: "c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08"
	I0730 01:17:58.404310  535383 cri.go:89] found id: ""
	I0730 01:17:58.404372  535383 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.142296746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afbdd18a-fad5-4c92-836b-2931cea45665 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.143388758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d926175-75cc-4078-b6b7-e0f2a582dee7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.143801902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722302530143780099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d926175-75cc-4078-b6b7-e0f2a582dee7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.144312236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33060e11-e91a-42ef-ba4b-241651825891 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.144370056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33060e11-e91a-42ef-ba4b-241651825891 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.144686782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b27c24cd05b05107a462e1ccb3897ab9ab3ae78491b94d256b8688e6eab8fb38,PodSandboxId:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722302318114352498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722302291553937440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c,PodSandboxId:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722302284747678925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739
cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902,PodSandboxId:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722302284689500794,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d,PodSandboxId:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722302284583588700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:map[string]string{io.kuber
netes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5998f46e07386c75d388adfee8a8c25bd20d88325a788af3ba21e7f3003b872f,PodSandboxId:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbbd5b769ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302284716085058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9ca9212e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb,PodSandboxId:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722302284545788472,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42,PodSandboxId:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722302284517298362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea,PodSandboxId:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722302284427188921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722302278426847371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073faf6c75a5cad115463e7508fafe76c793eed97435d89a30f6e7bfcbb529b8,PodSandboxId:f9e4f60f2d5f8924bdf8dcab6dcf380ff13ae865e38c099ea4a3062629c23e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722301958743444567,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea,PodSandboxId:647b456b957b35f6c66b985fbb1d700f665da5c31c9bcfd3b10b29490b675aeb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301906730248851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca9212e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b,PodSandboxId:d3fcad9a0e600f054b923cd34c3df17224211a0e0b044fe036c15c24b0163d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722301895185368837,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068,PodSandboxId:2ffcd2964fc06d698d7a51df6e09d1488b09dee7b676cb58322413eb10f80a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722301891672430680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5,PodSandboxId:bb0be5746b3e429e1efa5f7d85900ef2dc2ab841f0c81276101016204ee306c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722301871731927891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76
,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce,PodSandboxId:8a331ee826a466ca2a92a71f17cca64aabd048d3d7c0897beaa5642ca196984a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722301871708454594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3,PodSandboxId:b0b923422344e02661c9849f9a733af633cf5b30bef90a3a883fd85743a1be4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722301871704339305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08,PodSandboxId:449335c90087a8fa0cea9dfdfa5a464478c1f1fdf8342bd1c33c3303078eca7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722301871625577137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map
[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33060e11-e91a-42ef-ba4b-241651825891 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.185107723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41ef6e45-e02f-4d46-9848-e0efbdf78a4f name=/runtime.v1.RuntimeService/Version
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.185202044Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41ef6e45-e02f-4d46-9848-e0efbdf78a4f name=/runtime.v1.RuntimeService/Version
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.186062228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d022f1b7-2f90-4e21-b46b-e415a09a437a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.186476259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722302530186454861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d022f1b7-2f90-4e21-b46b-e415a09a437a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.186979865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58bceecc-2be7-4543-9d3c-c1c60156243a name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.187051517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58bceecc-2be7-4543-9d3c-c1c60156243a name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.187414685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b27c24cd05b05107a462e1ccb3897ab9ab3ae78491b94d256b8688e6eab8fb38,PodSandboxId:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722302318114352498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722302291553937440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c,PodSandboxId:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722302284747678925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739
cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902,PodSandboxId:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722302284689500794,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d,PodSandboxId:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722302284583588700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:map[string]string{io.kuber
netes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5998f46e07386c75d388adfee8a8c25bd20d88325a788af3ba21e7f3003b872f,PodSandboxId:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbbd5b769ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302284716085058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9ca9212e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb,PodSandboxId:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722302284545788472,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42,PodSandboxId:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722302284517298362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea,PodSandboxId:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722302284427188921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722302278426847371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073faf6c75a5cad115463e7508fafe76c793eed97435d89a30f6e7bfcbb529b8,PodSandboxId:f9e4f60f2d5f8924bdf8dcab6dcf380ff13ae865e38c099ea4a3062629c23e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722301958743444567,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea,PodSandboxId:647b456b957b35f6c66b985fbb1d700f665da5c31c9bcfd3b10b29490b675aeb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301906730248851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca9212e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b,PodSandboxId:d3fcad9a0e600f054b923cd34c3df17224211a0e0b044fe036c15c24b0163d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722301895185368837,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068,PodSandboxId:2ffcd2964fc06d698d7a51df6e09d1488b09dee7b676cb58322413eb10f80a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722301891672430680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5,PodSandboxId:bb0be5746b3e429e1efa5f7d85900ef2dc2ab841f0c81276101016204ee306c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722301871731927891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76
,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce,PodSandboxId:8a331ee826a466ca2a92a71f17cca64aabd048d3d7c0897beaa5642ca196984a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722301871708454594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3,PodSandboxId:b0b923422344e02661c9849f9a733af633cf5b30bef90a3a883fd85743a1be4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722301871704339305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08,PodSandboxId:449335c90087a8fa0cea9dfdfa5a464478c1f1fdf8342bd1c33c3303078eca7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722301871625577137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map
[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58bceecc-2be7-4543-9d3c-c1c60156243a name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.231523626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1005158-5d0b-4d17-b5c1-1e3b98fa6638 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.231609875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1005158-5d0b-4d17-b5c1-1e3b98fa6638 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.233676833Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8920004c-708d-4d6f-bff5-5fac1f7ee0a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.234144533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722302530234121521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8920004c-708d-4d6f-bff5-5fac1f7ee0a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.234681964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83310577-e317-4b65-82b1-36a83c9af2f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.234738953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83310577-e317-4b65-82b1-36a83c9af2f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.235178266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b27c24cd05b05107a462e1ccb3897ab9ab3ae78491b94d256b8688e6eab8fb38,PodSandboxId:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722302318114352498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722302291553937440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c,PodSandboxId:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722302284747678925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739
cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902,PodSandboxId:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722302284689500794,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d,PodSandboxId:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722302284583588700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:map[string]string{io.kuber
netes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5998f46e07386c75d388adfee8a8c25bd20d88325a788af3ba21e7f3003b872f,PodSandboxId:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbbd5b769ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302284716085058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9ca9212e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb,PodSandboxId:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722302284545788472,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42,PodSandboxId:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722302284517298362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea,PodSandboxId:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722302284427188921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722302278426847371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073faf6c75a5cad115463e7508fafe76c793eed97435d89a30f6e7bfcbb529b8,PodSandboxId:f9e4f60f2d5f8924bdf8dcab6dcf380ff13ae865e38c099ea4a3062629c23e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722301958743444567,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea,PodSandboxId:647b456b957b35f6c66b985fbb1d700f665da5c31c9bcfd3b10b29490b675aeb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301906730248851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca9212e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b,PodSandboxId:d3fcad9a0e600f054b923cd34c3df17224211a0e0b044fe036c15c24b0163d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722301895185368837,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068,PodSandboxId:2ffcd2964fc06d698d7a51df6e09d1488b09dee7b676cb58322413eb10f80a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722301891672430680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5,PodSandboxId:bb0be5746b3e429e1efa5f7d85900ef2dc2ab841f0c81276101016204ee306c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722301871731927891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76
,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce,PodSandboxId:8a331ee826a466ca2a92a71f17cca64aabd048d3d7c0897beaa5642ca196984a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722301871708454594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3,PodSandboxId:b0b923422344e02661c9849f9a733af633cf5b30bef90a3a883fd85743a1be4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722301871704339305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08,PodSandboxId:449335c90087a8fa0cea9dfdfa5a464478c1f1fdf8342bd1c33c3303078eca7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722301871625577137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map
[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83310577-e317-4b65-82b1-36a83c9af2f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.261487864Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=188b4aed-aa83-4a0b-a560-6258e7b8444f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.261918033Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-t9w48,Uid:6e9e683c-04d9-456c-a7d5-206e09d00256,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302317995199154,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:18:11.234261553Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&PodSandboxMetadata{Name:kube-proxy-kknjc,Uid:e340baed-b0bc-417f-a3c8-2739cfdc97c4,Namespace:kube-system,Attempt:1,},State:S
ANDBOX_READY,CreatedAt:1722302284246698981,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:11:30.375318803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-543365,Uid:3e7bbbb2b9fff26b5f93da0f692e3a38,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284243297459,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,tier: control-plane,},Annotations:map[string]str
ing{kubernetes.io/config.hash: 3e7bbbb2b9fff26b5f93da0f692e3a38,kubernetes.io/config.seen: 2024-07-30T01:11:16.791714966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&PodSandboxMetadata{Name:etcd-multinode-543365,Uid:cca80933eb38ca6746fd9fbe9232fa76,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284242224105,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.235:2379,kubernetes.io/config.hash: cca80933eb38ca6746fd9fbe9232fa76,kubernetes.io/config.seen: 2024-07-30T01:11:16.791716103Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbb
d5b769ec,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a772f2dd-f657-4eb2-9c29-f612c46c1e6e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284242182405,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\
":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-30T01:11:46.285951132Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-543365,Uid:44a2facdcdb5be9b2ea24038d2e5e2c1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284239551984,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.235:8443,kubernetes.io/config.hash: 44a2facdcdb5be9b2ea24038d2e5e2c1,kubernetes.io/conf
ig.seen: 2024-07-30T01:11:16.791708695Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-543365,Uid:0c8a5915b3d273b95feb8931e355b638,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284230041644,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0c8a5915b3d273b95feb8931e355b638,kubernetes.io/config.seen: 2024-07-30T01:11:16.791713691Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&PodSandboxMetadata{Name:kindnet-nhqxm,Uid:60c95f91-4cb1-4f07-a34c-bed380318903,Names
pace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302284225795118,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:11:30.335065133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4lxcw,Uid:1498a653-557e-46df-84a2-a58156bebfe7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722302278256706302,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,k8s-app: kube-dns,pod-temp
late-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:11:46.290124206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f9e4f60f2d5f8924bdf8dcab6dcf380ff13ae865e38c099ea4a3062629c23e5b,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-t9w48,Uid:6e9e683c-04d9-456c-a7d5-206e09d00256,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722301956012342743,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:12:35.704762065Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:647b456b957b35f6c66b985fbb1d700f665da5c31c9bcfd3b10b29490b675aeb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a772f2dd-f657-4eb2-9c29-f612c46c1e6e,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1722301906594091090,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-30T01:11:46.285951132Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ffcd2964fc06d698d7a51df6e09d1488b09dee7b676cb58322413eb10f80a73,Metadata:&PodSandboxMetadata{Name:kube-proxy-kknjc,Uid:e340baed-b0bc-417f-a3c8-2739cfdc97c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722301891581412437,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:11:30.375318803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3fcad9a0e600f054b923cd34c3df17224211a0e0b044fe036c15c24b0163d7d,Metadata:&PodSandboxMetadata{Name:kindnet-nhqxm,Uid:60c95f91-4cb1-4f07-a34c-bed38031890
3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722301891541312262,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:11:30.335065133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:449335c90087a8fa0cea9dfdfa5a464478c1f1fdf8342bd1c33c3303078eca7e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-543365,Uid:44a2facdcdb5be9b2ea24038d2e5e2c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722301871494475953,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be
9b2ea24038d2e5e2c1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.235:8443,kubernetes.io/config.hash: 44a2facdcdb5be9b2ea24038d2e5e2c1,kubernetes.io/config.seen: 2024-07-30T01:11:11.037531563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b0b923422344e02661c9849f9a733af633cf5b30bef90a3a883fd85743a1be4c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-543365,Uid:0c8a5915b3d273b95feb8931e355b638,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722301871493419639,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0c8a5915b3d273b95feb8931e355b638,kubernetes.io/config.seen: 2024-07-30T01:11:11.03
7532707Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bb0be5746b3e429e1efa5f7d85900ef2dc2ab841f0c81276101016204ee306c3,Metadata:&PodSandboxMetadata{Name:etcd-multinode-543365,Uid:cca80933eb38ca6746fd9fbe9232fa76,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722301871477315982,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.235:2379,kubernetes.io/config.hash: cca80933eb38ca6746fd9fbe9232fa76,kubernetes.io/config.seen: 2024-07-30T01:11:11.037527506Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8a331ee826a466ca2a92a71f17cca64aabd048d3d7c0897beaa5642ca196984a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-543365,Uid:3e7bbbb2b9fff26b5f93da0f692e3a3
8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722301871474862032,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3e7bbbb2b9fff26b5f93da0f692e3a38,kubernetes.io/config.seen: 2024-07-30T01:11:11.037533558Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=188b4aed-aa83-4a0b-a560-6258e7b8444f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.262708202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=caaba160-ddfe-4e0d-8d85-89259949e95d name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.262808208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=caaba160-ddfe-4e0d-8d85-89259949e95d name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:22:10 multinode-543365 crio[2853]: time="2024-07-30 01:22:10.263195563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b27c24cd05b05107a462e1ccb3897ab9ab3ae78491b94d256b8688e6eab8fb38,PodSandboxId:8b8f426addeefc009186b0dace2571ec68460a9194653a34d94dc74c1eff849a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722302318114352498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722302291553937440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c,PodSandboxId:3e9cd17ba880b33a5140e32d6324548624e7f1b92e414e9ee0c5c7e4fed1a79a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722302284747678925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340baed-b0bc-417f-a3c8-2739
cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902,PodSandboxId:a9843a16da32fff4fbd512ce08e9eebd1ce544841305c78417fae108e3f586db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722302284689500794,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d,PodSandboxId:3620e80db1dedaab154343b792162eec23bdf5adedac93f0bbc1c9b2eaa6316b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722302284583588700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:map[string]string{io.kuber
netes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5998f46e07386c75d388adfee8a8c25bd20d88325a788af3ba21e7f3003b872f,PodSandboxId:6974746fb4d9827bf7dce45e1b8ffcf5729adc23d2ac8b029058adbbd5b769ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302284716085058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9ca9212e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb,PodSandboxId:50e664b75bda84fc7919caa04cff9ec2b5aa040d189ae77f5c512bf5096068e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722302284545788472,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42,PodSandboxId:0d53fd9b06f6996dea6672b800245779531da32c8f96805da78f3f6a919542a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722302284517298362,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea,PodSandboxId:219ed7a1f7971e82bfd8fec8cef35932a021cd8442e36565c6d3ed6d694fa3bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722302284427188921,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887,PodSandboxId:a273cf71b99491dd73ed02dbc710ba04ac24699e2267e6ed4348187d73ac3e4b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722302278426847371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4lxcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1498a653-557e-46df-84a2-a58156bebfe7,},Annotations:map[string]string{io.kubernetes.container.hash: fbd267f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073faf6c75a5cad115463e7508fafe76c793eed97435d89a30f6e7bfcbb529b8,PodSandboxId:f9e4f60f2d5f8924bdf8dcab6dcf380ff13ae865e38c099ea4a3062629c23e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722301958743444567,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t9w48,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e9e683c-04d9-456c-a7d5-206e09d00256,},Annotations:map[string]string{io.kubernetes.container.hash: 807fee3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6aa7eb02bfb7f3cbafb6492d5d5986dd101afd4bdb969d4e29d3e122b28ba6ea,PodSandboxId:647b456b957b35f6c66b985fbb1d700f665da5c31c9bcfd3b10b29490b675aeb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722301906730248851,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a772f2dd-f657-4eb2-9c29-f612c46c1e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca9212e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b,PodSandboxId:d3fcad9a0e600f054b923cd34c3df17224211a0e0b044fe036c15c24b0163d7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722301895185368837,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nhqxm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 60c95f91-4cb1-4f07-a34c-bed380318903,},Annotations:map[string]string{io.kubernetes.container.hash: 3e46084,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068,PodSandboxId:2ffcd2964fc06d698d7a51df6e09d1488b09dee7b676cb58322413eb10f80a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722301891672430680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kknjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e340baed-b0bc-417f-a3c8-2739cfdc97c4,},Annotations:map[string]string{io.kubernetes.container.hash: 39f3db51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5,PodSandboxId:bb0be5746b3e429e1efa5f7d85900ef2dc2ab841f0c81276101016204ee306c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722301871731927891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca80933eb38ca6746fd9fbe9232fa76
,},Annotations:map[string]string{io.kubernetes.container.hash: dcd7552c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce,PodSandboxId:8a331ee826a466ca2a92a71f17cca64aabd048d3d7c0897beaa5642ca196984a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722301871708454594,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7bbbb2b9fff26b5f93da0f692e3a38,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3,PodSandboxId:b0b923422344e02661c9849f9a733af633cf5b30bef90a3a883fd85743a1be4c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722301871704339305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8a5915b3d273b95feb8931e355b638,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08,PodSandboxId:449335c90087a8fa0cea9dfdfa5a464478c1f1fdf8342bd1c33c3303078eca7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722301871625577137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-543365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44a2facdcdb5be9b2ea24038d2e5e2c1,},Annotations:map
[string]string{io.kubernetes.container.hash: c268361b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=caaba160-ddfe-4e0d-8d85-89259949e95d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b27c24cd05b05       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   8b8f426addeef       busybox-fc5497c4f-t9w48
	552636b457ec5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   a273cf71b9949       coredns-7db6d8ff4d-4lxcw
	7782ced092804       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   3e9cd17ba880b       kube-proxy-kknjc
	5998f46e07386       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   6974746fb4d98       storage-provisioner
	2d47482944552       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   a9843a16da32f       kindnet-nhqxm
	2308013e18c51       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   3620e80db1ded       kube-scheduler-multinode-543365
	ee4c048a4833a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   50e664b75bda8       etcd-multinode-543365
	06b51cae1ca69       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   0d53fd9b06f69       kube-controller-manager-multinode-543365
	6359129ac77b1       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   219ed7a1f7971       kube-apiserver-multinode-543365
	302e6de0ed6c4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   a273cf71b9949       coredns-7db6d8ff4d-4lxcw
	073faf6c75a5c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   f9e4f60f2d5f8       busybox-fc5497c4f-t9w48
	6aa7eb02bfb7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   647b456b957b3       storage-provisioner
	14e9c0b67555e       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   d3fcad9a0e600       kindnet-nhqxm
	e5c812257815d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   2ffcd2964fc06       kube-proxy-kknjc
	0c315fbcec823       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   bb0be5746b3e4       etcd-multinode-543365
	0f8bdfa3ecd41       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   8a331ee826a46       kube-scheduler-multinode-543365
	1a7e2b10c6248       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   b0b923422344e       kube-controller-manager-multinode-543365
	c06510d11072b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   449335c90087a       kube-apiserver-multinode-543365
	
	
	==> coredns [302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59384 - 59809 "HINFO IN 124160274080010694.8473823160395440456. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009936685s
	
	
	==> coredns [552636b457ec580b3683a0b35047ff08613485ba3b62bbc01d99988a7ff0cfe7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45898 - 49885 "HINFO IN 387166435644859509.2428656921758690141. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009760093s
	
	
	==> describe nodes <==
	Name:               multinode-543365
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-543365
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=multinode-543365
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T01_11_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 01:11:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-543365
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 01:22:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 01:18:10 +0000   Tue, 30 Jul 2024 01:11:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 01:18:10 +0000   Tue, 30 Jul 2024 01:11:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 01:18:10 +0000   Tue, 30 Jul 2024 01:11:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 01:18:10 +0000   Tue, 30 Jul 2024 01:11:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    multinode-543365
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8beae51f009544c29d620bade862ba87
	  System UUID:                8beae51f-0095-44c2-9d62-0bade862ba87
	  Boot ID:                    e722d9d3-6cdf-4e6f-87f3-5bc6618d6fde
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t9w48                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 coredns-7db6d8ff4d-4lxcw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-543365                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-nhqxm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-543365             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-543365    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-kknjc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-543365             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 10m    kube-proxy       
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node multinode-543365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node multinode-543365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node multinode-543365 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m    node-controller  Node multinode-543365 event: Registered Node multinode-543365 in Controller
	  Normal  NodeReady                10m    kubelet          Node multinode-543365 status is now: NodeReady
	  Normal  Starting                 4m     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m     kubelet          Node multinode-543365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m     kubelet          Node multinode-543365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m     kubelet          Node multinode-543365 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s  node-controller  Node multinode-543365 event: Registered Node multinode-543365 in Controller
	
	
	Name:               multinode-543365-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-543365-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=multinode-543365
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T01_18_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 01:18:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-543365-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 01:19:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 30 Jul 2024 01:19:18 +0000   Tue, 30 Jul 2024 01:20:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 30 Jul 2024 01:19:18 +0000   Tue, 30 Jul 2024 01:20:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 30 Jul 2024 01:19:18 +0000   Tue, 30 Jul 2024 01:20:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 30 Jul 2024 01:19:18 +0000   Tue, 30 Jul 2024 01:20:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    multinode-543365-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 00aa7d560b51467a8ea0a1e18e9bb185
	  System UUID:                00aa7d56-0b51-467a-8ea0-a1e18e9bb185
	  Boot ID:                    11986d1f-e015-4837-a51d-871e1666745b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qq57b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-kbsgw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m56s
	  kube-system                 kube-proxy-xpm28           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m56s (x2 over 9m56s)  kubelet          Node multinode-543365-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s (x2 over 9m56s)  kubelet          Node multinode-543365-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s (x2 over 9m56s)  kubelet          Node multinode-543365-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m37s                  kubelet          Node multinode-543365-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-543365-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-543365-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-543365-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-543365-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-543365-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.045817] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.157353] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.147732] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.280147] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +3.954713] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.626878] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.056235] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.974926] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.080176] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.619588] systemd-fstab-generator[1458]: Ignoring "noauto" option for root device
	[  +0.127447] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.506502] kauditd_printk_skb: 56 callbacks suppressed
	[Jul30 01:12] kauditd_printk_skb: 14 callbacks suppressed
	[Jul30 01:17] systemd-fstab-generator[2771]: Ignoring "noauto" option for root device
	[  +0.164369] systemd-fstab-generator[2783]: Ignoring "noauto" option for root device
	[  +0.162604] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.130906] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +0.260143] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +3.692329] systemd-fstab-generator[2936]: Ignoring "noauto" option for root device
	[Jul30 01:18] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.674066] systemd-fstab-generator[3798]: Ignoring "noauto" option for root device
	[  +0.090969] kauditd_printk_skb: 62 callbacks suppressed
	[ +11.259037] kauditd_printk_skb: 19 callbacks suppressed
	[  +2.212095] systemd-fstab-generator[3974]: Ignoring "noauto" option for root device
	[ +14.481074] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [0c315fbcec823d267905ff207d44bd8ff40452a3e61ebb1f1a0cf78f728dd1a5] <==
	{"level":"info","ts":"2024-07-30T01:12:14.370556Z","caller":"traceutil/trace.go:171","msg":"trace[1437787926] range","detail":"{range_begin:/registry/minions/multinode-543365-m02; range_end:; response_count:0; response_revision:443; }","duration":"233.93309ms","start":"2024-07-30T01:12:14.13657Z","end":"2024-07-30T01:12:14.370503Z","steps":["trace[1437787926] 'agreement among raft nodes before linearized reading'  (duration: 232.162451ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:12:14.369606Z","caller":"traceutil/trace.go:171","msg":"trace[264826785] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"179.928507ms","start":"2024-07-30T01:12:14.18967Z","end":"2024-07-30T01:12:14.369598Z","steps":["trace[264826785] 'process raft request'  (duration: 179.696656ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T01:12:22.280131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.277606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-kbsgw\" ","response":"range_response_count:1 size:4929"}
	{"level":"info","ts":"2024-07-30T01:12:22.28025Z","caller":"traceutil/trace.go:171","msg":"trace[1052457535] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-kbsgw; range_end:; response_count:1; response_revision:487; }","duration":"106.409544ms","start":"2024-07-30T01:12:22.173808Z","end":"2024-07-30T01:12:22.280218Z","steps":["trace[1052457535] 'range keys from in-memory index tree'  (duration: 106.17213ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:12:22.446022Z","caller":"traceutil/trace.go:171","msg":"trace[1404368536] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"103.291558ms","start":"2024-07-30T01:12:22.342715Z","end":"2024-07-30T01:12:22.446007Z","steps":["trace[1404368536] 'read index received'  (duration: 103.062394ms)","trace[1404368536] 'applied index is now lower than readState.Index'  (duration: 228.542µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T01:12:22.44621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.456406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-30T01:12:22.446788Z","caller":"traceutil/trace.go:171","msg":"trace[2054337537] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:488; }","duration":"104.083102ms","start":"2024-07-30T01:12:22.342691Z","end":"2024-07-30T01:12:22.446775Z","steps":["trace[2054337537] 'agreement among raft nodes before linearized reading'  (duration: 103.462358ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:12:22.446264Z","caller":"traceutil/trace.go:171","msg":"trace[603993300] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"159.168817ms","start":"2024-07-30T01:12:22.287071Z","end":"2024-07-30T01:12:22.44624Z","steps":["trace[603993300] 'process raft request'  (duration: 158.776558ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:12:22.667479Z","caller":"traceutil/trace.go:171","msg":"trace[606038577] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"215.218292ms","start":"2024-07-30T01:12:22.452236Z","end":"2024-07-30T01:12:22.667455Z","steps":["trace[606038577] 'process raft request'  (duration: 149.893449ms)","trace[606038577] 'compare'  (duration: 64.983009ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T01:13:06.023952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.046533ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15688448736247668784 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-543365-m03.17e6d887e9e6ec82\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-543365-m03.17e6d887e9e6ec82\" value_size:646 lease:6465076699392892621 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-30T01:13:06.024198Z","caller":"traceutil/trace.go:171","msg":"trace[1750628680] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"232.738495ms","start":"2024-07-30T01:13:05.791445Z","end":"2024-07-30T01:13:06.024183Z","steps":["trace[1750628680] 'process raft request'  (duration: 89.821428ms)","trace[1750628680] 'compare'  (duration: 141.92676ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-30T01:13:06.02427Z","caller":"traceutil/trace.go:171","msg":"trace[458488859] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"146.30831ms","start":"2024-07-30T01:13:05.87795Z","end":"2024-07-30T01:13:06.024258Z","steps":["trace[458488859] 'process raft request'  (duration: 146.090823ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:13:06.024477Z","caller":"traceutil/trace.go:171","msg":"trace[338737904] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"207.159985ms","start":"2024-07-30T01:13:05.817307Z","end":"2024-07-30T01:13:06.024467Z","steps":["trace[338737904] 'read index received'  (duration: 63.966971ms)","trace[338737904] 'applied index is now lower than readState.Index'  (duration: 143.192262ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T01:13:06.024585Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.282275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.235\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-30T01:13:06.027627Z","caller":"traceutil/trace.go:171","msg":"trace[1592212921] range","detail":"{range_begin:/registry/masterleases/192.168.39.235; range_end:; response_count:1; response_revision:574; }","duration":"210.342957ms","start":"2024-07-30T01:13:05.817267Z","end":"2024-07-30T01:13:06.02761Z","steps":["trace[1592212921] 'agreement among raft nodes before linearized reading'  (duration: 207.235991ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:16:22.133641Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-30T01:16:22.133758Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-543365","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.235:2380"],"advertise-client-urls":["https://192.168.39.235:2379"]}
	{"level":"warn","ts":"2024-07-30T01:16:22.133862Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T01:16:22.134238Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T01:16:22.183709Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.235:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-30T01:16:22.183805Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.235:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-30T01:16:22.185357Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"feb6ae41040cd9b8","current-leader-member-id":"feb6ae41040cd9b8"}
	{"level":"info","ts":"2024-07-30T01:16:22.18848Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-07-30T01:16:22.188601Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-07-30T01:16:22.188612Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-543365","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.235:2380"],"advertise-client-urls":["https://192.168.39.235:2379"]}
	
	
	==> etcd [ee4c048a4833a4bccdfa1db706f3f58f6f733f64a5d761f62799116b4f71f6eb] <==
	{"level":"info","ts":"2024-07-30T01:18:05.193359Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-30T01:18:05.193369Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-30T01:18:05.193606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 switched to configuration voters=(18354048925659093432)"}
	{"level":"info","ts":"2024-07-30T01:18:05.193674Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1b3c53dd134e6187","local-member-id":"feb6ae41040cd9b8","added-peer-id":"feb6ae41040cd9b8","added-peer-peer-urls":["https://192.168.39.235:2380"]}
	{"level":"info","ts":"2024-07-30T01:18:05.193802Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1b3c53dd134e6187","local-member-id":"feb6ae41040cd9b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T01:18:05.193845Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T01:18:05.198101Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-30T01:18:05.19841Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"feb6ae41040cd9b8","initial-advertise-peer-urls":["https://192.168.39.235:2380"],"listen-peer-urls":["https://192.168.39.235:2380"],"advertise-client-urls":["https://192.168.39.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-30T01:18:05.199958Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-30T01:18:05.200182Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-07-30T01:18:05.20392Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-07-30T01:18:06.726953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-30T01:18:06.727011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-30T01:18:06.727048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgPreVoteResp from feb6ae41040cd9b8 at term 2"}
	{"level":"info","ts":"2024-07-30T01:18:06.727063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became candidate at term 3"}
	{"level":"info","ts":"2024-07-30T01:18:06.727069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgVoteResp from feb6ae41040cd9b8 at term 3"}
	{"level":"info","ts":"2024-07-30T01:18:06.727088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became leader at term 3"}
	{"level":"info","ts":"2024-07-30T01:18:06.727097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: feb6ae41040cd9b8 elected leader feb6ae41040cd9b8 at term 3"}
	{"level":"info","ts":"2024-07-30T01:18:06.732493Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"feb6ae41040cd9b8","local-member-attributes":"{Name:multinode-543365 ClientURLs:[https://192.168.39.235:2379]}","request-path":"/0/members/feb6ae41040cd9b8/attributes","cluster-id":"1b3c53dd134e6187","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T01:18:06.732487Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T01:18:06.732991Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T01:18:06.733526Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T01:18:06.733617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T01:18:06.734797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.235:2379"}
	{"level":"info","ts":"2024-07-30T01:18:06.736504Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:22:10 up 11 min,  0 users,  load average: 0.03, 0.10, 0.06
	Linux multinode-543365 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14e9c0b67555eb5b74ee1c022e6ad2001b37372b2a8ed8cf3b7e1dd0272bcb1b] <==
	I0730 01:15:36.123530       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:15:46.123353       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:15:46.123611       1 main.go:299] handling current node
	I0730 01:15:46.123684       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:15:46.123704       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:15:46.123999       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:15:46.124033       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:15:56.126770       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:15:56.126934       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:15:56.127124       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:15:56.127160       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:15:56.127237       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:15:56.127257       1 main.go:299] handling current node
	I0730 01:16:06.123982       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:16:06.124030       1 main.go:299] handling current node
	I0730 01:16:06.124063       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:16:06.124069       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:16:06.124231       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:16:06.124254       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:16:16.126105       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0730 01:16:16.126189       1 main.go:322] Node multinode-543365-m03 has CIDR [10.244.3.0/24] 
	I0730 01:16:16.126382       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:16:16.126404       1 main.go:299] handling current node
	I0730 01:16:16.126420       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:16:16.126425       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [2d47482944552054ee01305c4800862d274b69a99f76677c25ca2c9b3d0a7902] <==
	I0730 01:21:05.624276       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:21:15.627229       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:21:15.627339       1 main.go:299] handling current node
	I0730 01:21:15.627372       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:21:15.627391       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:21:25.629464       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:21:25.629533       1 main.go:299] handling current node
	I0730 01:21:25.629555       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:21:25.629561       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:21:35.627456       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:21:35.627591       1 main.go:299] handling current node
	I0730 01:21:35.627620       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:21:35.627637       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:21:45.626585       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:21:45.626673       1 main.go:299] handling current node
	I0730 01:21:45.626730       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:21:45.626736       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:21:55.629022       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:21:55.629156       1 main.go:299] handling current node
	I0730 01:21:55.629193       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:21:55.629212       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:22:05.624060       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0730 01:22:05.624216       1 main.go:322] Node multinode-543365-m02 has CIDR [10.244.1.0/24] 
	I0730 01:22:05.624368       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0730 01:22:05.624393       1 main.go:299] handling current node
	
	
	==> kube-apiserver [6359129ac77b10507040db60628cb17af2dc818f1e1d5f8ffd626863a10b4aea] <==
	I0730 01:18:08.034336       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0730 01:18:08.039734       1 aggregator.go:165] initial CRD sync complete...
	I0730 01:18:08.040001       1 autoregister_controller.go:141] Starting autoregister controller
	I0730 01:18:08.040099       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0730 01:18:08.093109       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0730 01:18:08.101771       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 01:18:08.101938       1 policy_source.go:224] refreshing policies
	I0730 01:18:08.109205       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 01:18:08.109241       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 01:18:08.109819       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0730 01:18:08.114961       1 shared_informer.go:320] Caches are synced for configmaps
	I0730 01:18:08.115015       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0730 01:18:08.115021       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0730 01:18:08.115709       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0730 01:18:08.119651       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0730 01:18:08.132985       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0730 01:18:08.142286       1 cache.go:39] Caches are synced for autoregister controller
	I0730 01:18:08.918641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0730 01:18:10.803021       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0730 01:18:10.911681       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0730 01:18:10.921819       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0730 01:18:10.985132       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0730 01:18:10.991068       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0730 01:18:21.247514       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 01:18:21.348155       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c06510d11072bdda7e330e0f30629cf04ea5dd7c638d7396e447cf02b69b1e08] <==
	W0730 01:16:22.151636       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.151685       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.151741       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.151793       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.156572       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.156816       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157178       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157253       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157308       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157360       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157414       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157478       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157537       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157596       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157658       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157704       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157753       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157809       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.157860       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158140       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158524       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158586       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158651       1 logging.go:59] [core] [Channel #8 SubChannel #9] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158801       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:16:22.158864       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [06b51cae1ca6928a553c852f4659127a4eca2cee3abd6eace706de8f27d81a42] <==
	I0730 01:18:47.784324       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m02\" does not exist"
	I0730 01:18:47.791577       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m02" podCIDRs=["10.244.1.0/24"]
	I0730 01:18:48.699598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.714µs"
	I0730 01:18:48.711757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.413µs"
	I0730 01:18:48.723852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.433µs"
	I0730 01:18:48.741709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.709µs"
	I0730 01:18:48.750605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.737µs"
	I0730 01:18:48.754040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.744µs"
	I0730 01:19:06.278610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:19:06.298916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.551µs"
	I0730 01:19:06.312454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.171µs"
	I0730 01:19:10.334560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.288858ms"
	I0730 01:19:10.334667       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.632µs"
	I0730 01:19:24.194268       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:19:25.484543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:19:25.484854       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m03\" does not exist"
	I0730 01:19:25.513055       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m03" podCIDRs=["10.244.2.0/24"]
	I0730 01:19:43.526580       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:19:48.819059       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:20:31.314640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.496312ms"
	I0730 01:20:31.314997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.921µs"
	I0730 01:21:01.132053       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2qw48"
	I0730 01:21:01.153025       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2qw48"
	I0730 01:21:01.153061       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-srwdc"
	I0730 01:21:01.175623       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-srwdc"
	
	
	==> kube-controller-manager [1a7e2b10c62484c6e810554cf470f474fea21464bbe54ed080a2c697853333b3] <==
	I0730 01:12:14.373075       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m02\" does not exist"
	I0730 01:12:14.386123       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m02" podCIDRs=["10.244.1.0/24"]
	I0730 01:12:14.549257       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-543365-m02"
	I0730 01:12:33.137362       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:12:35.706814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.858206ms"
	I0730 01:12:35.727179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.228029ms"
	I0730 01:12:35.740025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.798703ms"
	I0730 01:12:35.740227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.066µs"
	I0730 01:12:39.185072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.656252ms"
	I0730 01:12:39.186126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.755µs"
	I0730 01:12:39.233120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.752452ms"
	I0730 01:12:39.235511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.774µs"
	I0730 01:13:06.026136       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m03\" does not exist"
	I0730 01:13:06.026715       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:13:06.040561       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m03" podCIDRs=["10.244.2.0/24"]
	I0730 01:13:09.571701       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-543365-m03"
	I0730 01:13:26.768602       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m03"
	I0730 01:13:55.487715       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:13:56.426523       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-543365-m03\" does not exist"
	I0730 01:13:56.427354       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:13:56.443231       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-543365-m03" podCIDRs=["10.244.3.0/24"]
	I0730 01:14:15.886041       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m03"
	I0730 01:14:59.625709       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-543365-m02"
	I0730 01:14:59.698098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.451409ms"
	I0730 01:14:59.698214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.178µs"
	
	
	==> kube-proxy [7782ced0928040b4fc6dbb64d9febfa962cfee8fe67ddc966ea1f876283d963c] <==
	I0730 01:18:05.576355       1 server_linux.go:69] "Using iptables proxy"
	I0730 01:18:08.092518       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.235"]
	I0730 01:18:08.153927       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 01:18:08.153990       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 01:18:08.154012       1 server_linux.go:165] "Using iptables Proxier"
	I0730 01:18:08.156432       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 01:18:08.156784       1 server.go:872] "Version info" version="v1.30.3"
	I0730 01:18:08.156797       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 01:18:08.158734       1 config.go:192] "Starting service config controller"
	I0730 01:18:08.158795       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 01:18:08.158841       1 config.go:101] "Starting endpoint slice config controller"
	I0730 01:18:08.158859       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 01:18:08.159498       1 config.go:319] "Starting node config controller"
	I0730 01:18:08.159526       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 01:18:08.259851       1 shared_informer.go:320] Caches are synced for node config
	I0730 01:18:08.259953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 01:18:08.260051       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e5c812257815dffd63ef88f9e50942e54d837ccd04bffeba282b4db95302f068] <==
	I0730 01:11:31.869943       1 server_linux.go:69] "Using iptables proxy"
	I0730 01:11:31.885455       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.235"]
	I0730 01:11:31.916184       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0730 01:11:31.916279       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 01:11:31.916315       1 server_linux.go:165] "Using iptables Proxier"
	I0730 01:11:31.918541       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 01:11:31.919017       1 server.go:872] "Version info" version="v1.30.3"
	I0730 01:11:31.919046       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 01:11:31.920994       1 config.go:192] "Starting service config controller"
	I0730 01:11:31.921025       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 01:11:31.921048       1 config.go:101] "Starting endpoint slice config controller"
	I0730 01:11:31.921051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 01:11:31.921517       1 config.go:319] "Starting node config controller"
	I0730 01:11:31.921551       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 01:11:32.021454       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 01:11:32.021549       1 shared_informer.go:320] Caches are synced for service config
	I0730 01:11:32.021568       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f8bdfa3ecd417f0475d59819b3e159a30dffdef3fc91abb43cb8d6bf4d16dce] <==
	E0730 01:11:14.134500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0730 01:11:14.134601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 01:11:14.134624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 01:11:14.134713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 01:11:14.134735       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 01:11:14.135130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 01:11:14.135288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 01:11:14.980795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0730 01:11:14.980842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0730 01:11:15.015121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0730 01:11:15.015172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0730 01:11:15.033808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 01:11:15.033838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 01:11:15.072417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 01:11:15.072533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 01:11:15.085326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 01:11:15.085485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 01:11:15.156140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 01:11:15.156261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 01:11:15.375186       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 01:11:15.375959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 01:11:15.450014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0730 01:11:15.450340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0730 01:11:15.726056       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0730 01:16:22.125774       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [2308013e18c51ee0b02bd087c830d0028d9429af2c37fb834b3e28e4c543478d] <==
	W0730 01:18:08.030661       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 01:18:08.030672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0730 01:18:08.030729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 01:18:08.030752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 01:18:08.030806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0730 01:18:08.030829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0730 01:18:08.030931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0730 01:18:08.030954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0730 01:18:08.031013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 01:18:08.031035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 01:18:08.031085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 01:18:08.031108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 01:18:08.031158       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0730 01:18:08.031180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0730 01:18:08.031239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 01:18:08.031261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 01:18:08.031319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 01:18:08.031341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 01:18:08.031416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0730 01:18:08.031438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0730 01:18:08.031492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 01:18:08.031512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0730 01:18:08.031567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 01:18:08.031587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0730 01:18:09.004944       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: E0730 01:18:11.497219    3805 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-multinode-543365\" already exists" pod="kube-system/etcd-multinode-543365"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: E0730 01:18:11.501499    3805 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-543365\" already exists" pod="kube-system/kube-apiserver-multinode-543365"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: E0730 01:18:11.504432    3805 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-543365\" already exists" pod="kube-system/kube-controller-manager-multinode-543365"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: E0730 01:18:11.505691    3805 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-543365\" already exists" pod="kube-system/kube-scheduler-multinode-543365"
	Jul 30 01:18:11 multinode-543365 kubelet[3805]: I0730 01:18:11.535529    3805 scope.go:117] "RemoveContainer" containerID="302e6de0ed6c4685a9ae49f42895d43b7c3c111520b5ced87e000e065b504887"
	Jul 30 01:19:10 multinode-543365 kubelet[3805]: E0730 01:19:10.393622    3805 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 01:19:10 multinode-543365 kubelet[3805]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 01:19:10 multinode-543365 kubelet[3805]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 01:19:10 multinode-543365 kubelet[3805]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 01:19:10 multinode-543365 kubelet[3805]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 01:20:10 multinode-543365 kubelet[3805]: E0730 01:20:10.387506    3805 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 01:20:10 multinode-543365 kubelet[3805]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 01:20:10 multinode-543365 kubelet[3805]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 01:20:10 multinode-543365 kubelet[3805]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 01:20:10 multinode-543365 kubelet[3805]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 01:21:10 multinode-543365 kubelet[3805]: E0730 01:21:10.389446    3805 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 01:21:10 multinode-543365 kubelet[3805]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 01:21:10 multinode-543365 kubelet[3805]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 01:21:10 multinode-543365 kubelet[3805]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 01:21:10 multinode-543365 kubelet[3805]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 30 01:22:10 multinode-543365 kubelet[3805]: E0730 01:22:10.387374    3805 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 30 01:22:10 multinode-543365 kubelet[3805]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 30 01:22:10 multinode-543365 kubelet[3805]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 30 01:22:10 multinode-543365 kubelet[3805]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 30 01:22:10 multinode-543365 kubelet[3805]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0730 01:22:09.838181  537726 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19346-495103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-543365 -n multinode-543365
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-543365 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.44s)

                                                
                                    
TestPreload (240.12s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-637752 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0730 01:26:10.081237  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-637752 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m43.293147543s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-637752 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-637752 image pull gcr.io/k8s-minikube/busybox: (2.857398288s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-637752
E0730 01:28:42.934761  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-637752: (6.574260241s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-637752 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-637752 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.153930578s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-637752 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-30 01:29:53.885383383 +0000 UTC m=+5148.654985644
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-637752 -n test-preload-637752
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-637752 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-637752 logs -n 25: (1.17796146s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365 sudo cat                                       | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m03_multinode-543365.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt                       | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m02:/home/docker/cp-test_multinode-543365-m03_multinode-543365-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n                                                                 | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | multinode-543365-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-543365 ssh -n multinode-543365-m02 sudo cat                                   | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	|         | /home/docker/cp-test_multinode-543365-m03_multinode-543365-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-543365 node stop m03                                                          | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:13 UTC |
	| node    | multinode-543365 node start                                                             | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:13 UTC | 30 Jul 24 01:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-543365                                                                | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:14 UTC |                     |
	| stop    | -p multinode-543365                                                                     | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:14 UTC |                     |
	| start   | -p multinode-543365                                                                     | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:16 UTC | 30 Jul 24 01:19 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-543365                                                                | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:19 UTC |                     |
	| node    | multinode-543365 node delete                                                            | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:19 UTC | 30 Jul 24 01:19 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-543365 stop                                                                   | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:19 UTC |                     |
	| start   | -p multinode-543365                                                                     | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:22 UTC | 30 Jul 24 01:25 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-543365                                                                | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:25 UTC |                     |
	| start   | -p multinode-543365-m02                                                                 | multinode-543365-m02 | jenkins | v1.33.1 | 30 Jul 24 01:25 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-543365-m03                                                                 | multinode-543365-m03 | jenkins | v1.33.1 | 30 Jul 24 01:25 UTC | 30 Jul 24 01:25 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-543365                                                                 | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:25 UTC |                     |
	| delete  | -p multinode-543365-m03                                                                 | multinode-543365-m03 | jenkins | v1.33.1 | 30 Jul 24 01:25 UTC | 30 Jul 24 01:25 UTC |
	| delete  | -p multinode-543365                                                                     | multinode-543365     | jenkins | v1.33.1 | 30 Jul 24 01:25 UTC | 30 Jul 24 01:25 UTC |
	| start   | -p test-preload-637752                                                                  | test-preload-637752  | jenkins | v1.33.1 | 30 Jul 24 01:25 UTC | 30 Jul 24 01:28 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-637752 image pull                                                          | test-preload-637752  | jenkins | v1.33.1 | 30 Jul 24 01:28 UTC | 30 Jul 24 01:28 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-637752                                                                  | test-preload-637752  | jenkins | v1.33.1 | 30 Jul 24 01:28 UTC | 30 Jul 24 01:28 UTC |
	| start   | -p test-preload-637752                                                                  | test-preload-637752  | jenkins | v1.33.1 | 30 Jul 24 01:28 UTC | 30 Jul 24 01:29 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-637752 image list                                                          | test-preload-637752  | jenkins | v1.33.1 | 30 Jul 24 01:29 UTC | 30 Jul 24 01:29 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 01:28:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 01:28:49.529755  540360 out.go:291] Setting OutFile to fd 1 ...
	I0730 01:28:49.529873  540360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:28:49.529882  540360 out.go:304] Setting ErrFile to fd 2...
	I0730 01:28:49.529887  540360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:28:49.530068  540360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 01:28:49.530591  540360 out.go:298] Setting JSON to false
	I0730 01:28:49.531579  540360 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11472,"bootTime":1722291458,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 01:28:49.531642  540360 start.go:139] virtualization: kvm guest
	I0730 01:28:49.534726  540360 out.go:177] * [test-preload-637752] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 01:28:49.536074  540360 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 01:28:49.536111  540360 notify.go:220] Checking for updates...
	I0730 01:28:49.538555  540360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 01:28:49.539718  540360 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 01:28:49.540825  540360 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 01:28:49.542002  540360 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 01:28:49.543185  540360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 01:28:49.544673  540360 config.go:182] Loaded profile config "test-preload-637752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0730 01:28:49.545075  540360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:28:49.545133  540360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:28:49.559994  540360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0730 01:28:49.560422  540360 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:28:49.561071  540360 main.go:141] libmachine: Using API Version  1
	I0730 01:28:49.561097  540360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:28:49.561471  540360 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:28:49.561726  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:28:49.563709  540360 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0730 01:28:49.564974  540360 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 01:28:49.565436  540360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:28:49.565483  540360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:28:49.580128  540360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I0730 01:28:49.580626  540360 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:28:49.581180  540360 main.go:141] libmachine: Using API Version  1
	I0730 01:28:49.581207  540360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:28:49.581552  540360 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:28:49.581740  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:28:49.618107  540360 out.go:177] * Using the kvm2 driver based on existing profile
	I0730 01:28:49.619155  540360 start.go:297] selected driver: kvm2
	I0730 01:28:49.619167  540360 start.go:901] validating driver "kvm2" against &{Name:test-preload-637752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-637752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:28:49.619267  540360 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 01:28:49.619954  540360 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:28:49.620063  540360 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 01:28:49.634957  540360 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 01:28:49.635292  540360 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 01:28:49.635346  540360 cni.go:84] Creating CNI manager for ""
	I0730 01:28:49.635357  540360 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 01:28:49.635423  540360 start.go:340] cluster config:
	{Name:test-preload-637752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-637752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:28:49.635543  540360 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:28:49.637276  540360 out.go:177] * Starting "test-preload-637752" primary control-plane node in "test-preload-637752" cluster
	I0730 01:28:49.638443  540360 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0730 01:28:50.476941  540360 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0730 01:28:50.477000  540360 cache.go:56] Caching tarball of preloaded images
	I0730 01:28:50.477168  540360 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0730 01:28:50.478908  540360 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0730 01:28:50.479981  540360 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0730 01:28:50.577769  540360 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0730 01:29:00.996074  540360 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0730 01:29:00.996201  540360 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0730 01:29:01.986332  540360 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0730 01:29:01.986461  540360 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/config.json ...
	I0730 01:29:01.986688  540360 start.go:360] acquireMachinesLock for test-preload-637752: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 01:29:01.986751  540360 start.go:364] duration metric: took 41.513µs to acquireMachinesLock for "test-preload-637752"
	I0730 01:29:01.986766  540360 start.go:96] Skipping create...Using existing machine configuration
	I0730 01:29:01.986771  540360 fix.go:54] fixHost starting: 
	I0730 01:29:01.987081  540360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:29:01.987113  540360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:29:02.002055  540360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0730 01:29:02.002529  540360 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:29:02.002985  540360 main.go:141] libmachine: Using API Version  1
	I0730 01:29:02.003012  540360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:29:02.003398  540360 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:29:02.003556  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:29:02.003675  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetState
	I0730 01:29:02.005240  540360 fix.go:112] recreateIfNeeded on test-preload-637752: state=Stopped err=<nil>
	I0730 01:29:02.005264  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	W0730 01:29:02.005446  540360 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 01:29:02.007575  540360 out.go:177] * Restarting existing kvm2 VM for "test-preload-637752" ...
	I0730 01:29:02.009076  540360 main.go:141] libmachine: (test-preload-637752) Calling .Start
	I0730 01:29:02.009247  540360 main.go:141] libmachine: (test-preload-637752) Ensuring networks are active...
	I0730 01:29:02.009907  540360 main.go:141] libmachine: (test-preload-637752) Ensuring network default is active
	I0730 01:29:02.010196  540360 main.go:141] libmachine: (test-preload-637752) Ensuring network mk-test-preload-637752 is active
	I0730 01:29:02.010587  540360 main.go:141] libmachine: (test-preload-637752) Getting domain xml...
	I0730 01:29:02.011267  540360 main.go:141] libmachine: (test-preload-637752) Creating domain...
	I0730 01:29:03.212359  540360 main.go:141] libmachine: (test-preload-637752) Waiting to get IP...
	I0730 01:29:03.213229  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:03.213598  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:03.213698  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:03.213595  540427 retry.go:31] will retry after 309.67187ms: waiting for machine to come up
	I0730 01:29:03.525209  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:03.525633  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:03.525677  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:03.525608  540427 retry.go:31] will retry after 243.862457ms: waiting for machine to come up
	I0730 01:29:03.771242  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:03.771594  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:03.771651  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:03.771562  540427 retry.go:31] will retry after 354.480103ms: waiting for machine to come up
	I0730 01:29:04.128263  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:04.128755  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:04.128780  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:04.128701  540427 retry.go:31] will retry after 517.044034ms: waiting for machine to come up
	I0730 01:29:04.647267  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:04.647620  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:04.647649  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:04.647574  540427 retry.go:31] will retry after 571.447631ms: waiting for machine to come up
	I0730 01:29:05.220297  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:05.220663  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:05.220694  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:05.220612  540427 retry.go:31] will retry after 595.671149ms: waiting for machine to come up
	I0730 01:29:05.818407  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:05.818739  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:05.818769  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:05.818695  540427 retry.go:31] will retry after 932.136007ms: waiting for machine to come up
	I0730 01:29:06.752753  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:06.753154  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:06.753188  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:06.753094  540427 retry.go:31] will retry after 960.410077ms: waiting for machine to come up
	I0730 01:29:07.714765  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:07.715202  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:07.715225  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:07.715180  540427 retry.go:31] will retry after 1.820353164s: waiting for machine to come up
	I0730 01:29:09.537791  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:09.538250  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:09.538278  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:09.538211  540427 retry.go:31] will retry after 1.623599458s: waiting for machine to come up
	I0730 01:29:11.164201  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:11.164736  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:11.164767  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:11.164646  540427 retry.go:31] will retry after 2.263274845s: waiting for machine to come up
	I0730 01:29:13.429625  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:13.430040  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:13.430064  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:13.429979  540427 retry.go:31] will retry after 3.029267557s: waiting for machine to come up
	I0730 01:29:16.463323  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:16.463704  540360 main.go:141] libmachine: (test-preload-637752) DBG | unable to find current IP address of domain test-preload-637752 in network mk-test-preload-637752
	I0730 01:29:16.463735  540360 main.go:141] libmachine: (test-preload-637752) DBG | I0730 01:29:16.463661  540427 retry.go:31] will retry after 2.947977769s: waiting for machine to come up
	I0730 01:29:19.415134  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.415593  540360 main.go:141] libmachine: (test-preload-637752) Found IP for machine: 192.168.39.146
	I0730 01:29:19.415617  540360 main.go:141] libmachine: (test-preload-637752) Reserving static IP address...
	I0730 01:29:19.415633  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has current primary IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.416012  540360 main.go:141] libmachine: (test-preload-637752) Reserved static IP address: 192.168.39.146
	I0730 01:29:19.416043  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "test-preload-637752", mac: "52:54:00:5f:e6:ad", ip: "192.168.39.146"} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:19.416056  540360 main.go:141] libmachine: (test-preload-637752) Waiting for SSH to be available...
	I0730 01:29:19.416074  540360 main.go:141] libmachine: (test-preload-637752) DBG | skip adding static IP to network mk-test-preload-637752 - found existing host DHCP lease matching {name: "test-preload-637752", mac: "52:54:00:5f:e6:ad", ip: "192.168.39.146"}
	I0730 01:29:19.416084  540360 main.go:141] libmachine: (test-preload-637752) DBG | Getting to WaitForSSH function...
	I0730 01:29:19.418539  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.418848  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:19.418878  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.418989  540360 main.go:141] libmachine: (test-preload-637752) DBG | Using SSH client type: external
	I0730 01:29:19.419013  540360 main.go:141] libmachine: (test-preload-637752) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/test-preload-637752/id_rsa (-rw-------)
	I0730 01:29:19.419082  540360 main.go:141] libmachine: (test-preload-637752) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/test-preload-637752/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 01:29:19.419112  540360 main.go:141] libmachine: (test-preload-637752) DBG | About to run SSH command:
	I0730 01:29:19.419128  540360 main.go:141] libmachine: (test-preload-637752) DBG | exit 0
	I0730 01:29:19.548693  540360 main.go:141] libmachine: (test-preload-637752) DBG | SSH cmd err, output: <nil>: 
	I0730 01:29:19.549082  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetConfigRaw
	I0730 01:29:19.549721  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetIP
	I0730 01:29:19.552142  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.552488  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:19.552521  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.552778  540360 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/config.json ...
	I0730 01:29:19.553007  540360 machine.go:94] provisionDockerMachine start ...
	I0730 01:29:19.553025  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:29:19.553220  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:19.555499  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.555800  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:19.555832  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.555951  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:19.556130  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:19.556299  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:19.556463  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:19.556647  540360 main.go:141] libmachine: Using SSH client type: native
	I0730 01:29:19.556951  540360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0730 01:29:19.556968  540360 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 01:29:19.672645  540360 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0730 01:29:19.672673  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetMachineName
	I0730 01:29:19.672954  540360 buildroot.go:166] provisioning hostname "test-preload-637752"
	I0730 01:29:19.673003  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetMachineName
	I0730 01:29:19.673230  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:19.675663  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.676002  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:19.676024  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.676127  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:19.676295  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:19.676436  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:19.676551  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:19.676692  540360 main.go:141] libmachine: Using SSH client type: native
	I0730 01:29:19.676903  540360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0730 01:29:19.676916  540360 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-637752 && echo "test-preload-637752" | sudo tee /etc/hostname
	I0730 01:29:19.802250  540360 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-637752
	
	I0730 01:29:19.802285  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:19.804834  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.805170  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:19.805201  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.805318  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:19.805521  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:19.805667  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:19.805811  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:19.805961  540360 main.go:141] libmachine: Using SSH client type: native
	I0730 01:29:19.806129  540360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0730 01:29:19.806145  540360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-637752' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-637752/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-637752' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 01:29:19.924716  540360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 01:29:19.924753  540360 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 01:29:19.924781  540360 buildroot.go:174] setting up certificates
	I0730 01:29:19.924793  540360 provision.go:84] configureAuth start
	I0730 01:29:19.924802  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetMachineName
	I0730 01:29:19.925119  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetIP
	I0730 01:29:19.928088  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.928425  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:19.928448  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.928604  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:19.930677  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.930999  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:19.931028  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:19.931165  540360 provision.go:143] copyHostCerts
	I0730 01:29:19.931234  540360 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 01:29:19.931251  540360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 01:29:19.931320  540360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 01:29:19.931409  540360 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 01:29:19.931417  540360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 01:29:19.931440  540360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 01:29:19.931493  540360 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 01:29:19.931500  540360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 01:29:19.931522  540360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 01:29:19.931570  540360 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.test-preload-637752 san=[127.0.0.1 192.168.39.146 localhost minikube test-preload-637752]
	I0730 01:29:20.085314  540360 provision.go:177] copyRemoteCerts
	I0730 01:29:20.085374  540360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 01:29:20.085406  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:20.088103  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.088439  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:20.088463  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.088660  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:20.088901  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:20.089051  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:20.089214  540360 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/test-preload-637752/id_rsa Username:docker}
	I0730 01:29:20.174469  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 01:29:20.197667  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0730 01:29:20.220048  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 01:29:20.242632  540360 provision.go:87] duration metric: took 317.822011ms to configureAuth
	I0730 01:29:20.242664  540360 buildroot.go:189] setting minikube options for container-runtime
	I0730 01:29:20.242880  540360 config.go:182] Loaded profile config "test-preload-637752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0730 01:29:20.242976  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:20.245681  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.246058  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:20.246088  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.246258  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:20.246491  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:20.246658  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:20.246833  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:20.247028  540360 main.go:141] libmachine: Using SSH client type: native
	I0730 01:29:20.247216  540360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0730 01:29:20.247236  540360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 01:29:20.507009  540360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 01:29:20.507051  540360 machine.go:97] duration metric: took 954.029536ms to provisionDockerMachine
	I0730 01:29:20.507063  540360 start.go:293] postStartSetup for "test-preload-637752" (driver="kvm2")
	I0730 01:29:20.507086  540360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 01:29:20.507105  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:29:20.507451  540360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 01:29:20.507484  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:20.510089  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.510422  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:20.510453  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.510625  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:20.510828  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:20.511018  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:20.511174  540360 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/test-preload-637752/id_rsa Username:docker}
	I0730 01:29:20.594784  540360 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 01:29:20.598853  540360 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 01:29:20.598882  540360 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 01:29:20.598967  540360 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 01:29:20.599045  540360 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 01:29:20.599133  540360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 01:29:20.607665  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:29:20.630151  540360 start.go:296] duration metric: took 123.060927ms for postStartSetup
	I0730 01:29:20.630194  540360 fix.go:56] duration metric: took 18.643422794s for fixHost
	I0730 01:29:20.630217  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:20.632839  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.633172  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:20.633204  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.633405  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:20.633626  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:20.633855  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:20.634090  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:20.634308  540360 main.go:141] libmachine: Using SSH client type: native
	I0730 01:29:20.634505  540360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0730 01:29:20.634518  540360 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 01:29:20.745050  540360 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722302960.720507266
	
	I0730 01:29:20.745071  540360 fix.go:216] guest clock: 1722302960.720507266
	I0730 01:29:20.745078  540360 fix.go:229] Guest: 2024-07-30 01:29:20.720507266 +0000 UTC Remote: 2024-07-30 01:29:20.630198782 +0000 UTC m=+31.135405258 (delta=90.308484ms)
	I0730 01:29:20.745125  540360 fix.go:200] guest clock delta is within tolerance: 90.308484ms
	I0730 01:29:20.745133  540360 start.go:83] releasing machines lock for "test-preload-637752", held for 18.758371798s
	I0730 01:29:20.745159  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:29:20.745480  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetIP
	I0730 01:29:20.747806  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.748157  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:20.748185  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.748336  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:29:20.748879  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:29:20.749063  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:29:20.749172  540360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 01:29:20.749225  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:20.749331  540360 ssh_runner.go:195] Run: cat /version.json
	I0730 01:29:20.749359  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:20.751821  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.752093  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.752136  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:20.752164  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.752319  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:20.752445  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:20.752476  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:20.752484  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:20.752637  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:20.752641  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:20.752801  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:20.752953  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:20.752976  540360 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/test-preload-637752/id_rsa Username:docker}
	I0730 01:29:20.753225  540360 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/test-preload-637752/id_rsa Username:docker}
	I0730 01:29:20.864743  540360 ssh_runner.go:195] Run: systemctl --version
	I0730 01:29:20.870476  540360 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 01:29:21.009799  540360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 01:29:21.015913  540360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 01:29:21.015995  540360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 01:29:21.030984  540360 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 01:29:21.031016  540360 start.go:495] detecting cgroup driver to use...
	I0730 01:29:21.031099  540360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 01:29:21.046164  540360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 01:29:21.059164  540360 docker.go:217] disabling cri-docker service (if available) ...
	I0730 01:29:21.059225  540360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 01:29:21.071791  540360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 01:29:21.084884  540360 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 01:29:21.191436  540360 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 01:29:21.327739  540360 docker.go:233] disabling docker service ...
	I0730 01:29:21.327821  540360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 01:29:21.341306  540360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 01:29:21.353637  540360 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 01:29:21.487126  540360 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 01:29:21.601297  540360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 01:29:21.614407  540360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 01:29:21.631527  540360 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0730 01:29:21.631600  540360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:29:21.640804  540360 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 01:29:21.640878  540360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:29:21.650342  540360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:29:21.659699  540360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:29:21.668952  540360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 01:29:21.678538  540360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:29:21.687747  540360 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:29:21.702973  540360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:29:21.712378  540360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 01:29:21.720588  540360 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 01:29:21.720633  540360 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 01:29:21.732832  540360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 01:29:21.741785  540360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:29:21.854389  540360 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 01:29:21.976595  540360 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 01:29:21.976681  540360 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 01:29:21.981741  540360 start.go:563] Will wait 60s for crictl version
	I0730 01:29:21.981798  540360 ssh_runner.go:195] Run: which crictl
	I0730 01:29:21.985174  540360 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 01:29:22.024092  540360 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 01:29:22.024191  540360 ssh_runner.go:195] Run: crio --version
	I0730 01:29:22.051516  540360 ssh_runner.go:195] Run: crio --version
	I0730 01:29:22.081115  540360 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0730 01:29:22.082539  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetIP
	I0730 01:29:22.085163  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:22.085488  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:22.085517  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:22.085719  540360 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0730 01:29:22.089541  540360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 01:29:22.101333  540360 kubeadm.go:883] updating cluster {Name:test-preload-637752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-637752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 01:29:22.101453  540360 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0730 01:29:22.101493  540360 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:29:22.139964  540360 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0730 01:29:22.140033  540360 ssh_runner.go:195] Run: which lz4
	I0730 01:29:22.143865  540360 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0730 01:29:22.147746  540360 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0730 01:29:22.147784  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0730 01:29:23.508377  540360 crio.go:462] duration metric: took 1.364538846s to copy over tarball
	I0730 01:29:23.508461  540360 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0730 01:29:25.771064  540360 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.262566329s)
	I0730 01:29:25.771100  540360 crio.go:469] duration metric: took 2.262692763s to extract the tarball
	I0730 01:29:25.771111  540360 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0730 01:29:25.811342  540360 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:29:25.851049  540360 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0730 01:29:25.851079  540360 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0730 01:29:25.851148  540360 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:29:25.851174  540360 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0730 01:29:25.851204  540360 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0730 01:29:25.851148  540360 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0730 01:29:25.851234  540360 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0730 01:29:25.851215  540360 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0730 01:29:25.851261  540360 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0730 01:29:25.851214  540360 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0730 01:29:25.852804  540360 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0730 01:29:25.852823  540360 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:29:25.852844  540360 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0730 01:29:25.852847  540360 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0730 01:29:25.852812  540360 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0730 01:29:25.852817  540360 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0730 01:29:25.852847  540360 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0730 01:29:25.852812  540360 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0730 01:29:26.068464  540360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0730 01:29:26.071622  540360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0730 01:29:26.071896  540360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0730 01:29:26.083118  540360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0730 01:29:26.091058  540360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0730 01:29:26.097738  540360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0730 01:29:26.116724  540360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0730 01:29:26.135330  540360 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0730 01:29:26.135408  540360 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0730 01:29:26.135463  540360 ssh_runner.go:195] Run: which crictl
	I0730 01:29:26.202644  540360 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0730 01:29:26.202698  540360 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0730 01:29:26.202700  540360 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0730 01:29:26.202732  540360 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0730 01:29:26.202752  540360 ssh_runner.go:195] Run: which crictl
	I0730 01:29:26.202776  540360 ssh_runner.go:195] Run: which crictl
	I0730 01:29:26.207690  540360 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0730 01:29:26.207734  540360 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0730 01:29:26.207779  540360 ssh_runner.go:195] Run: which crictl
	I0730 01:29:26.227212  540360 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0730 01:29:26.227251  540360 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0730 01:29:26.227284  540360 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0730 01:29:26.227294  540360 ssh_runner.go:195] Run: which crictl
	I0730 01:29:26.227316  540360 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0730 01:29:26.227366  540360 ssh_runner.go:195] Run: which crictl
	I0730 01:29:26.242898  540360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0730 01:29:26.242974  540360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0730 01:29:26.242986  540360 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0730 01:29:26.243023  540360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0730 01:29:26.243024  540360 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0730 01:29:26.242974  540360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0730 01:29:26.243070  540360 ssh_runner.go:195] Run: which crictl
	I0730 01:29:26.243097  540360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0730 01:29:26.243070  540360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0730 01:29:26.365848  540360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0730 01:29:26.365901  540360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0730 01:29:26.365931  540360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0730 01:29:26.365962  540360 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0730 01:29:26.366018  540360 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0730 01:29:26.377820  540360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0730 01:29:26.377842  540360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0730 01:29:26.377883  540360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0730 01:29:26.377920  540360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0730 01:29:26.377931  540360 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0730 01:29:26.377933  540360 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0730 01:29:26.377960  540360 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0730 01:29:26.377997  540360 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0730 01:29:26.409459  540360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0730 01:29:26.409478  540360 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0730 01:29:26.409494  540360 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0730 01:29:26.409519  540360 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0730 01:29:26.409546  540360 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0730 01:29:26.409567  540360 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0730 01:29:26.409570  540360 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0730 01:29:26.409607  540360 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0730 01:29:26.409614  540360 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0730 01:29:26.409545  540360 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0730 01:29:26.783789  540360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:29:29.565301  540360 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.155711234s)
	I0730 01:29:29.565344  540360 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0730 01:29:29.565343  540360 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: (3.155751224s)
	I0730 01:29:29.565371  540360 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0730 01:29:29.565377  540360 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0730 01:29:29.565417  540360 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.781590525s)
	I0730 01:29:29.565426  540360 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0730 01:29:29.910543  540360 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0730 01:29:29.910591  540360 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0730 01:29:29.910642  540360 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0730 01:29:30.354359  540360 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0730 01:29:30.354410  540360 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0730 01:29:30.354466  540360 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0730 01:29:31.197181  540360 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0730 01:29:31.197234  540360 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0730 01:29:31.197294  540360 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0730 01:29:31.935132  540360 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0730 01:29:31.935186  540360 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0730 01:29:31.935265  540360 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0730 01:29:32.073808  540360 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0730 01:29:32.073854  540360 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0730 01:29:32.073899  540360 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0730 01:29:34.222273  540360 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.148352642s)
	I0730 01:29:34.222304  540360 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0730 01:29:34.222328  540360 cache_images.go:123] Successfully loaded all cached images
	I0730 01:29:34.222335  540360 cache_images.go:92] duration metric: took 8.37124248s to LoadCachedImages
	I0730 01:29:34.222346  540360 kubeadm.go:934] updating node { 192.168.39.146 8443 v1.24.4 crio true true} ...
	I0730 01:29:34.222470  540360 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-637752 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-637752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 01:29:34.222543  540360 ssh_runner.go:195] Run: crio config
	I0730 01:29:34.268273  540360 cni.go:84] Creating CNI manager for ""
	I0730 01:29:34.268292  540360 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 01:29:34.268300  540360 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 01:29:34.268319  540360 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-637752 NodeName:test-preload-637752 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 01:29:34.268446  540360 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-637752"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 01:29:34.268506  540360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0730 01:29:34.278056  540360 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 01:29:34.278122  540360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 01:29:34.286986  540360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0730 01:29:34.302631  540360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 01:29:34.318533  540360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0730 01:29:34.335232  540360 ssh_runner.go:195] Run: grep 192.168.39.146	control-plane.minikube.internal$ /etc/hosts
	I0730 01:29:34.339013  540360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 01:29:34.350621  540360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:29:34.474203  540360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 01:29:34.490664  540360 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752 for IP: 192.168.39.146
	I0730 01:29:34.490696  540360 certs.go:194] generating shared ca certs ...
	I0730 01:29:34.490714  540360 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:29:34.490886  540360 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 01:29:34.490942  540360 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 01:29:34.490955  540360 certs.go:256] generating profile certs ...
	I0730 01:29:34.491072  540360 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/client.key
	I0730 01:29:34.491147  540360 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/apiserver.key.02c5fe21
	I0730 01:29:34.491200  540360 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/proxy-client.key
	I0730 01:29:34.491343  540360 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 01:29:34.491386  540360 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 01:29:34.491399  540360 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 01:29:34.491429  540360 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 01:29:34.491460  540360 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 01:29:34.491491  540360 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 01:29:34.491540  540360 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:29:34.492337  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 01:29:34.531766  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 01:29:34.560720  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 01:29:34.594673  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 01:29:34.630488  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0730 01:29:34.660630  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 01:29:34.696809  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 01:29:34.718013  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0730 01:29:34.739386  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 01:29:34.761231  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 01:29:34.783651  540360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 01:29:34.806049  540360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 01:29:34.821865  540360 ssh_runner.go:195] Run: openssl version
	I0730 01:29:34.827553  540360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 01:29:34.837697  540360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:29:34.841897  540360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:29:34.841938  540360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:29:34.847300  540360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 01:29:34.857353  540360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 01:29:34.867162  540360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 01:29:34.871488  540360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 01:29:34.871546  540360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 01:29:34.876805  540360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 01:29:34.887081  540360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 01:29:34.897079  540360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 01:29:34.901215  540360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 01:29:34.901284  540360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 01:29:34.906634  540360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 01:29:34.916608  540360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 01:29:34.921075  540360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 01:29:34.926777  540360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 01:29:34.932234  540360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 01:29:34.938161  540360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 01:29:34.943688  540360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 01:29:34.949036  540360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
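
The checks above reuse the existing control-plane certificates only if each one remains valid for at least the next 86400 seconds (24 hours), which is what "openssl x509 -noout -checkend 86400" verifies. A minimal Go sketch of an equivalent expiry check (illustrative only, not minikube's code), assuming a PEM-encoded certificate at one of the paths shown in the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path taken from the log above; any PEM certificate file works for the sketch.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM data found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Same idea as `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h; regenerate")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least 24h")
    }
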
	I0730 01:29:34.954288  540360 kubeadm.go:392] StartCluster: {Name:test-preload-637752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-637752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:29:34.954371  540360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 01:29:34.954411  540360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 01:29:34.993936  540360 cri.go:89] found id: ""
	I0730 01:29:34.994059  540360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 01:29:35.004037  540360 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0730 01:29:35.004064  540360 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0730 01:29:35.004138  540360 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0730 01:29:35.013588  540360 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0730 01:29:35.014050  540360 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-637752" does not appear in /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 01:29:35.014180  540360 kubeconfig.go:62] /home/jenkins/minikube-integration/19346-495103/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-637752" cluster setting kubeconfig missing "test-preload-637752" context setting]
	I0730 01:29:35.014516  540360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/kubeconfig: {Name:mk6ecf4e5b07b810f1fa2b9790857d7586f0cf41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:29:35.015117  540360 kapi.go:59] client config for test-preload-637752: &rest.Config{Host:"https://192.168.39.146:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/client.crt", KeyFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/client.key", CAFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0730 01:29:35.015759  540360 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0730 01:29:35.024827  540360 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.146
	I0730 01:29:35.024857  540360 kubeadm.go:1160] stopping kube-system containers ...
	I0730 01:29:35.024870  540360 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0730 01:29:35.024935  540360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 01:29:35.058316  540360 cri.go:89] found id: ""
	I0730 01:29:35.058392  540360 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0730 01:29:35.073683  540360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 01:29:35.082952  540360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 01:29:35.082972  540360 kubeadm.go:157] found existing configuration files:
	
	I0730 01:29:35.083033  540360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 01:29:35.091796  540360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 01:29:35.091864  540360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 01:29:35.100958  540360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 01:29:35.109549  540360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 01:29:35.109628  540360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 01:29:35.118577  540360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 01:29:35.127158  540360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 01:29:35.127273  540360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 01:29:35.136270  540360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 01:29:35.144865  540360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 01:29:35.144929  540360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0730 01:29:35.153830  540360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0730 01:29:35.162985  540360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0730 01:29:35.260026  540360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0730 01:29:36.058205  540360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0730 01:29:36.315683  540360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0730 01:29:36.378290  540360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
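
Rather than running a full "kubeadm init", the restart path above replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A rough sketch of that sequence (an illustration only), assuming kubeadm and the config file are reachable locally rather than over SSH with a pinned PATH as in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(phase)...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            // The log runs the equivalent command as root on the node.
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }
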
	I0730 01:29:36.476393  540360 api_server.go:52] waiting for apiserver process to appear ...
	I0730 01:29:36.476506  540360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 01:29:36.976834  540360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 01:29:37.477303  540360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 01:29:37.493089  540360 api_server.go:72] duration metric: took 1.016713171s to wait for apiserver process to appear ...
	I0730 01:29:37.493119  540360 api_server.go:88] waiting for apiserver healthz status ...
	I0730 01:29:37.493145  540360 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0730 01:29:37.493693  540360 api_server.go:269] stopped: https://192.168.39.146:8443/healthz: Get "https://192.168.39.146:8443/healthz": dial tcp 192.168.39.146:8443: connect: connection refused
	I0730 01:29:37.993462  540360 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0730 01:29:37.994069  540360 api_server.go:269] stopped: https://192.168.39.146:8443/healthz: Get "https://192.168.39.146:8443/healthz": dial tcp 192.168.39.146:8443: connect: connection refused
	I0730 01:29:38.493621  540360 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0730 01:29:41.658640  540360 api_server.go:279] https://192.168.39.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0730 01:29:41.658675  540360 api_server.go:103] status: https://192.168.39.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0730 01:29:41.658690  540360 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0730 01:29:41.719459  540360 api_server.go:279] https://192.168.39.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0730 01:29:41.719493  540360 api_server.go:103] status: https://192.168.39.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0730 01:29:41.993982  540360 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0730 01:29:42.000619  540360 api_server.go:279] https://192.168.39.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 01:29:42.000651  540360 api_server.go:103] status: https://192.168.39.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 01:29:42.493293  540360 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0730 01:29:42.500651  540360 api_server.go:279] https://192.168.39.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 01:29:42.500677  540360 api_server.go:103] status: https://192.168.39.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 01:29:42.993239  540360 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0730 01:29:42.999089  540360 api_server.go:279] https://192.168.39.146:8443/healthz returned 200:
	ok
	I0730 01:29:43.007138  540360 api_server.go:141] control plane version: v1.24.4
	I0730 01:29:43.007287  540360 api_server.go:131] duration metric: took 5.514152953s to wait for apiserver health ...
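
The wait loop above polls the apiserver's /healthz endpoint roughly every 500ms, treating connection refusals, 403s (anonymous requests before RBAC bootstrap completes), and 500s (post-start hooks such as rbac/bootstrap-roles still pending) as "not ready yet", and stops once it receives a 200 with body "ok". A minimal sketch of such a poll (not the actual minikube implementation), assuming the cluster CA certificate is available at a hypothetical local path:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        caPEM, err := os.ReadFile("/path/to/ca.crt") // hypothetical path; the log uses the profile's CA
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.146:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403 and 500 mean "keep waiting"; 200 with body "ok" means healthy.
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for apiserver /healthz")
        os.Exit(1)
    }
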
	I0730 01:29:43.007304  540360 cni.go:84] Creating CNI manager for ""
	I0730 01:29:43.007312  540360 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 01:29:43.009430  540360 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0730 01:29:43.010808  540360 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0730 01:29:43.022576  540360 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0730 01:29:43.051983  540360 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 01:29:43.061818  540360 system_pods.go:59] 7 kube-system pods found
	I0730 01:29:43.061854  540360 system_pods.go:61] "coredns-6d4b75cb6d-bv6tr" [04fea64b-4023-4f0c-aa89-695fb909b5ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0730 01:29:43.061862  540360 system_pods.go:61] "etcd-test-preload-637752" [4503a8fb-e1fe-40b7-a313-f124430ee8c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0730 01:29:43.061869  540360 system_pods.go:61] "kube-apiserver-test-preload-637752" [c11b42ee-8329-4d66-8326-d2b564c9fa04] Running
	I0730 01:29:43.061873  540360 system_pods.go:61] "kube-controller-manager-test-preload-637752" [13e893c6-619c-4dab-afa4-73384f7923fa] Running
	I0730 01:29:43.061879  540360 system_pods.go:61] "kube-proxy-7gpc8" [5879398a-a6fa-4ffd-93b7-4be06a194738] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0730 01:29:43.061888  540360 system_pods.go:61] "kube-scheduler-test-preload-637752" [e6d70dd8-021b-4964-8b03-5f8342df7381] Running
	I0730 01:29:43.061893  540360 system_pods.go:61] "storage-provisioner" [4a39557e-ad6d-426a-8452-d67e5c1f31a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0730 01:29:43.061899  540360 system_pods.go:74] duration metric: took 9.894333ms to wait for pod list to return data ...
	I0730 01:29:43.061907  540360 node_conditions.go:102] verifying NodePressure condition ...
	I0730 01:29:43.065488  540360 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 01:29:43.065513  540360 node_conditions.go:123] node cpu capacity is 2
	I0730 01:29:43.065525  540360 node_conditions.go:105] duration metric: took 3.613213ms to run NodePressure ...
	I0730 01:29:43.065542  540360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0730 01:29:43.225205  540360 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0730 01:29:43.230121  540360 kubeadm.go:739] kubelet initialised
	I0730 01:29:43.230144  540360 kubeadm.go:740] duration metric: took 4.914242ms waiting for restarted kubelet to initialise ...
	I0730 01:29:43.230153  540360 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 01:29:43.234643  540360 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-bv6tr" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:43.249252  540360 pod_ready.go:97] node "test-preload-637752" hosting pod "coredns-6d4b75cb6d-bv6tr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.249290  540360 pod_ready.go:81] duration metric: took 14.60982ms for pod "coredns-6d4b75cb6d-bv6tr" in "kube-system" namespace to be "Ready" ...
	E0730 01:29:43.249304  540360 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-637752" hosting pod "coredns-6d4b75cb6d-bv6tr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.249312  540360 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:43.255346  540360 pod_ready.go:97] node "test-preload-637752" hosting pod "etcd-test-preload-637752" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.255377  540360 pod_ready.go:81] duration metric: took 6.04861ms for pod "etcd-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	E0730 01:29:43.255386  540360 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-637752" hosting pod "etcd-test-preload-637752" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.255392  540360 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:43.260997  540360 pod_ready.go:97] node "test-preload-637752" hosting pod "kube-apiserver-test-preload-637752" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.261021  540360 pod_ready.go:81] duration metric: took 5.61956ms for pod "kube-apiserver-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	E0730 01:29:43.261030  540360 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-637752" hosting pod "kube-apiserver-test-preload-637752" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.261037  540360 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:43.457137  540360 pod_ready.go:97] node "test-preload-637752" hosting pod "kube-controller-manager-test-preload-637752" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.457174  540360 pod_ready.go:81] duration metric: took 196.126988ms for pod "kube-controller-manager-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	E0730 01:29:43.457187  540360 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-637752" hosting pod "kube-controller-manager-test-preload-637752" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.457196  540360 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7gpc8" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:43.856894  540360 pod_ready.go:97] node "test-preload-637752" hosting pod "kube-proxy-7gpc8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.856929  540360 pod_ready.go:81] duration metric: took 399.720659ms for pod "kube-proxy-7gpc8" in "kube-system" namespace to be "Ready" ...
	E0730 01:29:43.856943  540360 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-637752" hosting pod "kube-proxy-7gpc8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:43.856952  540360 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:44.256124  540360 pod_ready.go:97] node "test-preload-637752" hosting pod "kube-scheduler-test-preload-637752" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:44.256151  540360 pod_ready.go:81] duration metric: took 399.191602ms for pod "kube-scheduler-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	E0730 01:29:44.256162  540360 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-637752" hosting pod "kube-scheduler-test-preload-637752" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:44.256169  540360 pod_ready.go:38] duration metric: took 1.02600871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
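
Each pod wait above inspects the pod's Ready condition but deliberately skips (and records an error) while the hosting node still reports Ready=False, which is why every system pod is initially listed with "skipping!". A small client-go sketch of reading one pod's Ready condition (an illustration, not minikube's code), assuming KUBECONFIG points at the cluster's kubeconfig:

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes KUBECONFIG is set; the test uses the minikube-integration profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
            "coredns-6d4b75cb6d-bv6tr", metav1.GetOptions{})
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        ready := false
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }
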
	I0730 01:29:44.256187  540360 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0730 01:29:44.268322  540360 ops.go:34] apiserver oom_adj: -16
	I0730 01:29:44.268357  540360 kubeadm.go:597] duration metric: took 9.264284421s to restartPrimaryControlPlane
	I0730 01:29:44.268368  540360 kubeadm.go:394] duration metric: took 9.314089278s to StartCluster
	I0730 01:29:44.268392  540360 settings.go:142] acquiring lock: {Name:mk89b2537c1ec20302d90ab73b167422bb363b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:29:44.268464  540360 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 01:29:44.269162  540360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/kubeconfig: {Name:mk6ecf4e5b07b810f1fa2b9790857d7586f0cf41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:29:44.269399  540360 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 01:29:44.269514  540360 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0730 01:29:44.269578  540360 addons.go:69] Setting storage-provisioner=true in profile "test-preload-637752"
	I0730 01:29:44.269600  540360 addons.go:234] Setting addon storage-provisioner=true in "test-preload-637752"
	W0730 01:29:44.269606  540360 addons.go:243] addon storage-provisioner should already be in state true
	I0730 01:29:44.269606  540360 addons.go:69] Setting default-storageclass=true in profile "test-preload-637752"
	I0730 01:29:44.269651  540360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-637752"
	I0730 01:29:44.269693  540360 config.go:182] Loaded profile config "test-preload-637752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0730 01:29:44.269652  540360 host.go:66] Checking if "test-preload-637752" exists ...
	I0730 01:29:44.270078  540360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:29:44.270126  540360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:29:44.270161  540360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:29:44.270195  540360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:29:44.271236  540360 out.go:177] * Verifying Kubernetes components...
	I0730 01:29:44.272774  540360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:29:44.285777  540360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0730 01:29:44.285777  540360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34927
	I0730 01:29:44.286347  540360 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:29:44.286397  540360 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:29:44.286830  540360 main.go:141] libmachine: Using API Version  1
	I0730 01:29:44.286855  540360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:29:44.286952  540360 main.go:141] libmachine: Using API Version  1
	I0730 01:29:44.286966  540360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:29:44.287183  540360 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:29:44.287361  540360 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:29:44.287385  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetState
	I0730 01:29:44.287938  540360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:29:44.288001  540360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:29:44.290124  540360 kapi.go:59] client config for test-preload-637752: &rest.Config{Host:"https://192.168.39.146:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/client.crt", KeyFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/profiles/test-preload-637752/client.key", CAFile:"/home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0730 01:29:44.290511  540360 addons.go:234] Setting addon default-storageclass=true in "test-preload-637752"
	W0730 01:29:44.290533  540360 addons.go:243] addon default-storageclass should already be in state true
	I0730 01:29:44.290563  540360 host.go:66] Checking if "test-preload-637752" exists ...
	I0730 01:29:44.290937  540360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:29:44.291011  540360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:29:44.304354  540360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I0730 01:29:44.304893  540360 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:29:44.305489  540360 main.go:141] libmachine: Using API Version  1
	I0730 01:29:44.305517  540360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:29:44.305813  540360 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:29:44.306033  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetState
	I0730 01:29:44.306507  540360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0730 01:29:44.306871  540360 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:29:44.307358  540360 main.go:141] libmachine: Using API Version  1
	I0730 01:29:44.307383  540360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:29:44.307796  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:29:44.307866  540360 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:29:44.308489  540360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:29:44.308537  540360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:29:44.310271  540360 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:29:44.311801  540360 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 01:29:44.311822  540360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0730 01:29:44.311842  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:44.315364  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:44.315862  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:44.315892  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:44.316033  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:44.316208  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:44.316374  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:44.316512  540360 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/test-preload-637752/id_rsa Username:docker}
	I0730 01:29:44.324520  540360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0730 01:29:44.324990  540360 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:29:44.325474  540360 main.go:141] libmachine: Using API Version  1
	I0730 01:29:44.325501  540360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:29:44.325859  540360 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:29:44.326050  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetState
	I0730 01:29:44.327633  540360 main.go:141] libmachine: (test-preload-637752) Calling .DriverName
	I0730 01:29:44.327860  540360 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0730 01:29:44.327876  540360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0730 01:29:44.327893  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHHostname
	I0730 01:29:44.330429  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:44.330829  540360 main.go:141] libmachine: (test-preload-637752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e6:ad", ip: ""} in network mk-test-preload-637752: {Iface:virbr1 ExpiryTime:2024-07-30 02:29:12 +0000 UTC Type:0 Mac:52:54:00:5f:e6:ad Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:test-preload-637752 Clientid:01:52:54:00:5f:e6:ad}
	I0730 01:29:44.330851  540360 main.go:141] libmachine: (test-preload-637752) DBG | domain test-preload-637752 has defined IP address 192.168.39.146 and MAC address 52:54:00:5f:e6:ad in network mk-test-preload-637752
	I0730 01:29:44.331060  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHPort
	I0730 01:29:44.331278  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHKeyPath
	I0730 01:29:44.331435  540360 main.go:141] libmachine: (test-preload-637752) Calling .GetSSHUsername
	I0730 01:29:44.331560  540360 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/test-preload-637752/id_rsa Username:docker}
	I0730 01:29:44.436578  540360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 01:29:44.451584  540360 node_ready.go:35] waiting up to 6m0s for node "test-preload-637752" to be "Ready" ...
	I0730 01:29:44.561400  540360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 01:29:44.565549  540360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0730 01:29:45.635940  540360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.070357143s)
	I0730 01:29:45.636009  540360 main.go:141] libmachine: Making call to close driver server
	I0730 01:29:45.636023  540360 main.go:141] libmachine: (test-preload-637752) Calling .Close
	I0730 01:29:45.636061  540360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.07462308s)
	I0730 01:29:45.636104  540360 main.go:141] libmachine: Making call to close driver server
	I0730 01:29:45.636117  540360 main.go:141] libmachine: (test-preload-637752) Calling .Close
	I0730 01:29:45.636389  540360 main.go:141] libmachine: Successfully made call to close driver server
	I0730 01:29:45.636405  540360 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 01:29:45.636415  540360 main.go:141] libmachine: Making call to close driver server
	I0730 01:29:45.636422  540360 main.go:141] libmachine: (test-preload-637752) Calling .Close
	I0730 01:29:45.636443  540360 main.go:141] libmachine: (test-preload-637752) DBG | Closing plugin on server side
	I0730 01:29:45.636463  540360 main.go:141] libmachine: Successfully made call to close driver server
	I0730 01:29:45.636476  540360 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 01:29:45.636485  540360 main.go:141] libmachine: Making call to close driver server
	I0730 01:29:45.636497  540360 main.go:141] libmachine: (test-preload-637752) Calling .Close
	I0730 01:29:45.636645  540360 main.go:141] libmachine: Successfully made call to close driver server
	I0730 01:29:45.636674  540360 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 01:29:45.636673  540360 main.go:141] libmachine: (test-preload-637752) DBG | Closing plugin on server side
	I0730 01:29:45.636874  540360 main.go:141] libmachine: (test-preload-637752) DBG | Closing plugin on server side
	I0730 01:29:45.636892  540360 main.go:141] libmachine: Successfully made call to close driver server
	I0730 01:29:45.636902  540360 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 01:29:45.644810  540360 main.go:141] libmachine: Making call to close driver server
	I0730 01:29:45.644826  540360 main.go:141] libmachine: (test-preload-637752) Calling .Close
	I0730 01:29:45.645053  540360 main.go:141] libmachine: Successfully made call to close driver server
	I0730 01:29:45.645072  540360 main.go:141] libmachine: Making call to close connection to plugin binary
	I0730 01:29:45.645091  540360 main.go:141] libmachine: (test-preload-637752) DBG | Closing plugin on server side
	I0730 01:29:45.646941  540360 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0730 01:29:45.648318  540360 addons.go:510] duration metric: took 1.378814493s for enable addons: enabled=[storage-provisioner default-storageclass]
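
The addon manifests above are applied with the version-matched kubectl binary staged on the node (/var/lib/minikube/binaries/v1.24.4/kubectl) and the node-local admin kubeconfig, rather than the host's kubectl. A rough local equivalent of the logged storage-provisioner command (a sketch using the same paths the log shows; the real command runs as root over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command(
            "/var/lib/minikube/binaries/v1.24.4/kubectl",
            "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
        )
        // Point kubectl at the in-cluster admin kubeconfig, as in the logged command.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
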
	I0730 01:29:46.455625  540360 node_ready.go:53] node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:48.455665  540360 node_ready.go:53] node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:50.455786  540360 node_ready.go:53] node "test-preload-637752" has status "Ready":"False"
	I0730 01:29:51.955103  540360 node_ready.go:49] node "test-preload-637752" has status "Ready":"True"
	I0730 01:29:51.955128  540360 node_ready.go:38] duration metric: took 7.503511175s for node "test-preload-637752" to be "Ready" ...
	I0730 01:29:51.955145  540360 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 01:29:51.959902  540360 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-bv6tr" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:51.965671  540360 pod_ready.go:92] pod "coredns-6d4b75cb6d-bv6tr" in "kube-system" namespace has status "Ready":"True"
	I0730 01:29:51.965692  540360 pod_ready.go:81] duration metric: took 5.767642ms for pod "coredns-6d4b75cb6d-bv6tr" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:51.965700  540360 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:51.970872  540360 pod_ready.go:92] pod "etcd-test-preload-637752" in "kube-system" namespace has status "Ready":"True"
	I0730 01:29:51.970888  540360 pod_ready.go:81] duration metric: took 5.182267ms for pod "etcd-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:51.970895  540360 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:51.975697  540360 pod_ready.go:92] pod "kube-apiserver-test-preload-637752" in "kube-system" namespace has status "Ready":"True"
	I0730 01:29:51.975718  540360 pod_ready.go:81] duration metric: took 4.815604ms for pod "kube-apiserver-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:51.975730  540360 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:51.980320  540360 pod_ready.go:92] pod "kube-controller-manager-test-preload-637752" in "kube-system" namespace has status "Ready":"True"
	I0730 01:29:51.980340  540360 pod_ready.go:81] duration metric: took 4.601352ms for pod "kube-controller-manager-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:51.980351  540360 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gpc8" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:52.355190  540360 pod_ready.go:92] pod "kube-proxy-7gpc8" in "kube-system" namespace has status "Ready":"True"
	I0730 01:29:52.355221  540360 pod_ready.go:81] duration metric: took 374.857171ms for pod "kube-proxy-7gpc8" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:52.355233  540360 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:52.756393  540360 pod_ready.go:92] pod "kube-scheduler-test-preload-637752" in "kube-system" namespace has status "Ready":"True"
	I0730 01:29:52.756420  540360 pod_ready.go:81] duration metric: took 401.179931ms for pod "kube-scheduler-test-preload-637752" in "kube-system" namespace to be "Ready" ...
	I0730 01:29:52.756433  540360 pod_ready.go:38] duration metric: took 801.278006ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 01:29:52.756448  540360 api_server.go:52] waiting for apiserver process to appear ...
	I0730 01:29:52.756525  540360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 01:29:52.771125  540360 api_server.go:72] duration metric: took 8.501695604s to wait for apiserver process to appear ...
	I0730 01:29:52.771154  540360 api_server.go:88] waiting for apiserver healthz status ...
	I0730 01:29:52.771174  540360 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0730 01:29:52.776389  540360 api_server.go:279] https://192.168.39.146:8443/healthz returned 200:
	ok
	I0730 01:29:52.777709  540360 api_server.go:141] control plane version: v1.24.4
	I0730 01:29:52.777732  540360 api_server.go:131] duration metric: took 6.570642ms to wait for apiserver health ...
	I0730 01:29:52.777740  540360 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 01:29:52.957515  540360 system_pods.go:59] 7 kube-system pods found
	I0730 01:29:52.957551  540360 system_pods.go:61] "coredns-6d4b75cb6d-bv6tr" [04fea64b-4023-4f0c-aa89-695fb909b5ff] Running
	I0730 01:29:52.957557  540360 system_pods.go:61] "etcd-test-preload-637752" [4503a8fb-e1fe-40b7-a313-f124430ee8c7] Running
	I0730 01:29:52.957563  540360 system_pods.go:61] "kube-apiserver-test-preload-637752" [c11b42ee-8329-4d66-8326-d2b564c9fa04] Running
	I0730 01:29:52.957568  540360 system_pods.go:61] "kube-controller-manager-test-preload-637752" [13e893c6-619c-4dab-afa4-73384f7923fa] Running
	I0730 01:29:52.957572  540360 system_pods.go:61] "kube-proxy-7gpc8" [5879398a-a6fa-4ffd-93b7-4be06a194738] Running
	I0730 01:29:52.957577  540360 system_pods.go:61] "kube-scheduler-test-preload-637752" [e6d70dd8-021b-4964-8b03-5f8342df7381] Running
	I0730 01:29:52.957581  540360 system_pods.go:61] "storage-provisioner" [4a39557e-ad6d-426a-8452-d67e5c1f31a8] Running
	I0730 01:29:52.957588  540360 system_pods.go:74] duration metric: took 179.842269ms to wait for pod list to return data ...
	I0730 01:29:52.957597  540360 default_sa.go:34] waiting for default service account to be created ...
	I0730 01:29:53.155694  540360 default_sa.go:45] found service account: "default"
	I0730 01:29:53.155723  540360 default_sa.go:55] duration metric: took 198.117718ms for default service account to be created ...
	I0730 01:29:53.155731  540360 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 01:29:53.357292  540360 system_pods.go:86] 7 kube-system pods found
	I0730 01:29:53.357325  540360 system_pods.go:89] "coredns-6d4b75cb6d-bv6tr" [04fea64b-4023-4f0c-aa89-695fb909b5ff] Running
	I0730 01:29:53.357332  540360 system_pods.go:89] "etcd-test-preload-637752" [4503a8fb-e1fe-40b7-a313-f124430ee8c7] Running
	I0730 01:29:53.357338  540360 system_pods.go:89] "kube-apiserver-test-preload-637752" [c11b42ee-8329-4d66-8326-d2b564c9fa04] Running
	I0730 01:29:53.357357  540360 system_pods.go:89] "kube-controller-manager-test-preload-637752" [13e893c6-619c-4dab-afa4-73384f7923fa] Running
	I0730 01:29:53.357362  540360 system_pods.go:89] "kube-proxy-7gpc8" [5879398a-a6fa-4ffd-93b7-4be06a194738] Running
	I0730 01:29:53.357368  540360 system_pods.go:89] "kube-scheduler-test-preload-637752" [e6d70dd8-021b-4964-8b03-5f8342df7381] Running
	I0730 01:29:53.357373  540360 system_pods.go:89] "storage-provisioner" [4a39557e-ad6d-426a-8452-d67e5c1f31a8] Running
	I0730 01:29:53.357382  540360 system_pods.go:126] duration metric: took 201.643747ms to wait for k8s-apps to be running ...
	I0730 01:29:53.357395  540360 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 01:29:53.357454  540360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 01:29:53.371492  540360 system_svc.go:56] duration metric: took 14.086384ms WaitForService to wait for kubelet
	I0730 01:29:53.371553  540360 kubeadm.go:582] duration metric: took 9.102106752s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 01:29:53.371581  540360 node_conditions.go:102] verifying NodePressure condition ...
	I0730 01:29:53.556492  540360 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0730 01:29:53.556517  540360 node_conditions.go:123] node cpu capacity is 2
	I0730 01:29:53.556531  540360 node_conditions.go:105] duration metric: took 184.944069ms to run NodePressure ...
	I0730 01:29:53.556547  540360 start.go:241] waiting for startup goroutines ...
	I0730 01:29:53.556562  540360 start.go:246] waiting for cluster config update ...
	I0730 01:29:53.556580  540360 start.go:255] writing updated cluster config ...
	I0730 01:29:53.556884  540360 ssh_runner.go:195] Run: rm -f paused
	I0730 01:29:53.605302  540360 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0730 01:29:53.607309  540360 out.go:177] 
	W0730 01:29:53.608668  540360 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0730 01:29:53.609959  540360 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0730 01:29:53.611144  540360 out.go:177] * Done! kubectl is now configured to use "test-preload-637752" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.621533263Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4300fa7b01dfabd224ab669729bb089730c8e2e391a50e97c9059302a2627fb2,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-bv6tr,Uid:04fea64b-4023-4f0c-aa89-695fb909b5ff,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722302990435615840,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-bv6tr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04fea64b-4023-4f0c-aa89-695fb909b5ff,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:29:42.423031495Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c72de0203097d1d45f580c8aafa5029710125246573bf58b732a347bf57f5e89,Metadata:&PodSandboxMetadata{Name:kube-proxy-7gpc8,Uid:5879398a-a6fa-4ffd-93b7-4be06a194738,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1722302983337429933,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7gpc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5879398a-a6fa-4ffd-93b7-4be06a194738,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:29:42.423053396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:588ce4c8fee6892ad9dfcf34cb6078b3434a87151ff9b4006f66ebe45d749e3a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4a39557e-ad6d-426a-8452-d67e5c1f31a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722302983335313247,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a39557e-ad6d-426a-8452-d67e
5c1f31a8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-30T01:29:42.423055055Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c24802430a201bfcccb79dddf5a2c1d56c77c6e5d8cc356a79f18ba96f1c6f8e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-637752,Uid:2e9b109
903998134c215aae632cd0991,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722302976960577499,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e9b109903998134c215aae632cd0991,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.146:8443,kubernetes.io/config.hash: 2e9b109903998134c215aae632cd0991,kubernetes.io/config.seen: 2024-07-30T01:29:36.425107162Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:49dd355ac90b55e679b21b443bb1d27057ac4e56165fe85c5e150e614993c55e,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-637752,Uid:554f49a508920cbba12c054f97566d4c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722302976959782296,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-
test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554f49a508920cbba12c054f97566d4c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.146:2379,kubernetes.io/config.hash: 554f49a508920cbba12c054f97566d4c,kubernetes.io/config.seen: 2024-07-30T01:29:36.474453748Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09165fef4acd56e9eb40d248ff37356e62087e46a3259e1244c80a03315163fb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-637752,Uid:9d6f1c52c3fc04ae7034d60ebe1c25d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722302976956428203,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6f1c52c3fc04ae7034d60ebe1c25d9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/c
onfig.hash: 9d6f1c52c3fc04ae7034d60ebe1c25d9,kubernetes.io/config.seen: 2024-07-30T01:29:36.425078361Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b21a9a1fbd3b77d84c28f49d57967b36cc257a8afe63f3e1bb6314cf5d50fa5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-637752,Uid:5b0497c56efe1327bb78d84465bfd045,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722302976952647216,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0497c56efe1327bb78d84465bfd045,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5b0497c56efe1327bb78d84465bfd045,kubernetes.io/config.seen: 2024-07-30T01:29:36.425105963Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4cbd42c3-77db-4cdb-923e-ac9ac18fb1bb name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.622471545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c9b85a7-c73a-4ff7-83ea-70cb19000da1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.622539591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c9b85a7-c73a-4ff7-83ea-70cb19000da1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.622716084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3c32294e85216c8059311965fae38f28dc84e906ab4f40d0dfb25961626ece4e,PodSandboxId:4300fa7b01dfabd224ab669729bb089730c8e2e391a50e97c9059302a2627fb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722302990639336990,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bv6tr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04fea64b-4023-4f0c-aa89-695fb909b5ff,},Annotations:map[string]string{io.kubernetes.container.hash: 65d0b59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d5ef68381e3c870479b26b0839128977c313afeb6f04f15230f4be7c1d7fc1,PodSandboxId:588ce4c8fee6892ad9dfcf34cb6078b3434a87151ff9b4006f66ebe45d749e3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302984608138094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 4a39557e-ad6d-426a-8452-d67e5c1f31a8,},Annotations:map[string]string{io.kubernetes.container.hash: d4fc46a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f2d2cb48a9604108b711f9ac12a1b996c41a2639be1559a9f0fe4f11aeb9397,PodSandboxId:c72de0203097d1d45f580c8aafa5029710125246573bf58b732a347bf57f5e89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722302983453963347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gpc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587
9398a-a6fa-4ffd-93b7-4be06a194738,},Annotations:map[string]string{io.kubernetes.container.hash: 4b7651fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5e62f3ed267421757d7b2a1cca3b932df39538f2a6a4e87bc1cef3fef2667,PodSandboxId:588ce4c8fee6892ad9dfcf34cb6078b3434a87151ff9b4006f66ebe45d749e3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722302983450309214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a39557e-ad6d-42
6a-8452-d67e5c1f31a8,},Annotations:map[string]string{io.kubernetes.container.hash: d4fc46a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f55f9b1644f6daf5d1e5be8bf28f7c4200f868a05170ee9377f7190e4366a43,PodSandboxId:49dd355ac90b55e679b21b443bb1d27057ac4e56165fe85c5e150e614993c55e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722302977213791937,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554f49a508920cbba12c054f97566d4c,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 57a610bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5b84cfb146586dd65cf99817c5db7b778abe7b3ffc9bc78760f40d8b3142cb,PodSandboxId:c24802430a201bfcccb79dddf5a2c1d56c77c6e5d8cc356a79f18ba96f1c6f8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722302977197308508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e9b109903998134c215aae632cd0991,},Annotations:map[string]string
{io.kubernetes.container.hash: f34b2bdf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb794ada12cbe617b2fb0426010298ec98a7e4f008e347e148bf1dfc401fcaa3,PodSandboxId:5b21a9a1fbd3b77d84c28f49d57967b36cc257a8afe63f3e1bb6314cf5d50fa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722302977136239889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0497c56efe1327bb78d84465bfd045,},Annotations:map[string]string{io.kuberne
tes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b1b1552a2944376ebac65ed8385cad820f4f146495acdde65a504d91c25469,PodSandboxId:09165fef4acd56e9eb40d248ff37356e62087e46a3259e1244c80a03315163fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722302977112325194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6f1c52c3fc04ae7034d60ebe1c25d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c9b85a7-c73a-4ff7-83ea-70cb19000da1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.633148524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6df4be42-123d-4c43-ab07-a73b8db1bdf0 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.633229315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6df4be42-123d-4c43-ab07-a73b8db1bdf0 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.634326034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=886d38aa-55c2-4742-9bfb-3467bb5dbe54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.634756027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722302994634735261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=886d38aa-55c2-4742-9bfb-3467bb5dbe54 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.635190803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11860dd9-0977-4514-b06e-c70e011e602c name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.635259986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11860dd9-0977-4514-b06e-c70e011e602c name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.635424065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3c32294e85216c8059311965fae38f28dc84e906ab4f40d0dfb25961626ece4e,PodSandboxId:4300fa7b01dfabd224ab669729bb089730c8e2e391a50e97c9059302a2627fb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722302990639336990,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bv6tr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04fea64b-4023-4f0c-aa89-695fb909b5ff,},Annotations:map[string]string{io.kubernetes.container.hash: 65d0b59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d5ef68381e3c870479b26b0839128977c313afeb6f04f15230f4be7c1d7fc1,PodSandboxId:588ce4c8fee6892ad9dfcf34cb6078b3434a87151ff9b4006f66ebe45d749e3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302984608138094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 4a39557e-ad6d-426a-8452-d67e5c1f31a8,},Annotations:map[string]string{io.kubernetes.container.hash: d4fc46a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f2d2cb48a9604108b711f9ac12a1b996c41a2639be1559a9f0fe4f11aeb9397,PodSandboxId:c72de0203097d1d45f580c8aafa5029710125246573bf58b732a347bf57f5e89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722302983453963347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gpc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587
9398a-a6fa-4ffd-93b7-4be06a194738,},Annotations:map[string]string{io.kubernetes.container.hash: 4b7651fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5e62f3ed267421757d7b2a1cca3b932df39538f2a6a4e87bc1cef3fef2667,PodSandboxId:588ce4c8fee6892ad9dfcf34cb6078b3434a87151ff9b4006f66ebe45d749e3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722302983450309214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a39557e-ad6d-42
6a-8452-d67e5c1f31a8,},Annotations:map[string]string{io.kubernetes.container.hash: d4fc46a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f55f9b1644f6daf5d1e5be8bf28f7c4200f868a05170ee9377f7190e4366a43,PodSandboxId:49dd355ac90b55e679b21b443bb1d27057ac4e56165fe85c5e150e614993c55e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722302977213791937,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554f49a508920cbba12c054f97566d4c,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 57a610bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5b84cfb146586dd65cf99817c5db7b778abe7b3ffc9bc78760f40d8b3142cb,PodSandboxId:c24802430a201bfcccb79dddf5a2c1d56c77c6e5d8cc356a79f18ba96f1c6f8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722302977197308508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e9b109903998134c215aae632cd0991,},Annotations:map[string]string
{io.kubernetes.container.hash: f34b2bdf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb794ada12cbe617b2fb0426010298ec98a7e4f008e347e148bf1dfc401fcaa3,PodSandboxId:5b21a9a1fbd3b77d84c28f49d57967b36cc257a8afe63f3e1bb6314cf5d50fa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722302977136239889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0497c56efe1327bb78d84465bfd045,},Annotations:map[string]string{io.kuberne
tes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b1b1552a2944376ebac65ed8385cad820f4f146495acdde65a504d91c25469,PodSandboxId:09165fef4acd56e9eb40d248ff37356e62087e46a3259e1244c80a03315163fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722302977112325194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6f1c52c3fc04ae7034d60ebe1c25d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11860dd9-0977-4514-b06e-c70e011e602c name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.674018893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf205b58-7811-49bf-ba3e-353b74366fa7 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.674095371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf205b58-7811-49bf-ba3e-353b74366fa7 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.675093514Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=327a3733-0e71-4bc3-887c-a9d49b745724 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.675992345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722302994675968265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=327a3733-0e71-4bc3-887c-a9d49b745724 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.676512319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0c601aa-1bf5-4ce2-92c0-b5f2b8ade2df name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.676570161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0c601aa-1bf5-4ce2-92c0-b5f2b8ade2df name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.676746443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3c32294e85216c8059311965fae38f28dc84e906ab4f40d0dfb25961626ece4e,PodSandboxId:4300fa7b01dfabd224ab669729bb089730c8e2e391a50e97c9059302a2627fb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722302990639336990,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bv6tr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04fea64b-4023-4f0c-aa89-695fb909b5ff,},Annotations:map[string]string{io.kubernetes.container.hash: 65d0b59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d5ef68381e3c870479b26b0839128977c313afeb6f04f15230f4be7c1d7fc1,PodSandboxId:588ce4c8fee6892ad9dfcf34cb6078b3434a87151ff9b4006f66ebe45d749e3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302984608138094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 4a39557e-ad6d-426a-8452-d67e5c1f31a8,},Annotations:map[string]string{io.kubernetes.container.hash: d4fc46a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f2d2cb48a9604108b711f9ac12a1b996c41a2639be1559a9f0fe4f11aeb9397,PodSandboxId:c72de0203097d1d45f580c8aafa5029710125246573bf58b732a347bf57f5e89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722302983453963347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gpc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587
9398a-a6fa-4ffd-93b7-4be06a194738,},Annotations:map[string]string{io.kubernetes.container.hash: 4b7651fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5e62f3ed267421757d7b2a1cca3b932df39538f2a6a4e87bc1cef3fef2667,PodSandboxId:588ce4c8fee6892ad9dfcf34cb6078b3434a87151ff9b4006f66ebe45d749e3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722302983450309214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a39557e-ad6d-42
6a-8452-d67e5c1f31a8,},Annotations:map[string]string{io.kubernetes.container.hash: d4fc46a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f55f9b1644f6daf5d1e5be8bf28f7c4200f868a05170ee9377f7190e4366a43,PodSandboxId:49dd355ac90b55e679b21b443bb1d27057ac4e56165fe85c5e150e614993c55e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722302977213791937,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554f49a508920cbba12c054f97566d4c,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 57a610bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5b84cfb146586dd65cf99817c5db7b778abe7b3ffc9bc78760f40d8b3142cb,PodSandboxId:c24802430a201bfcccb79dddf5a2c1d56c77c6e5d8cc356a79f18ba96f1c6f8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722302977197308508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e9b109903998134c215aae632cd0991,},Annotations:map[string]string
{io.kubernetes.container.hash: f34b2bdf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb794ada12cbe617b2fb0426010298ec98a7e4f008e347e148bf1dfc401fcaa3,PodSandboxId:5b21a9a1fbd3b77d84c28f49d57967b36cc257a8afe63f3e1bb6314cf5d50fa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722302977136239889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0497c56efe1327bb78d84465bfd045,},Annotations:map[string]string{io.kuberne
tes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b1b1552a2944376ebac65ed8385cad820f4f146495acdde65a504d91c25469,PodSandboxId:09165fef4acd56e9eb40d248ff37356e62087e46a3259e1244c80a03315163fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722302977112325194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6f1c52c3fc04ae7034d60ebe1c25d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0c601aa-1bf5-4ce2-92c0-b5f2b8ade2df name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.719714449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b94c0a2d-b8cb-47ac-a1ac-0b21e9011f66 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.719825066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b94c0a2d-b8cb-47ac-a1ac-0b21e9011f66 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.721189805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9eae3b6c-1877-4594-8b59-2d74c702118e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.721620148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722302994721598677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9eae3b6c-1877-4594-8b59-2d74c702118e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.722113396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a85f1ee1-f2f4-40d7-bcb5-a622af9721fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.722217923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a85f1ee1-f2f4-40d7-bcb5-a622af9721fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:29:54 test-preload-637752 crio[679]: time="2024-07-30 01:29:54.722382123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3c32294e85216c8059311965fae38f28dc84e906ab4f40d0dfb25961626ece4e,PodSandboxId:4300fa7b01dfabd224ab669729bb089730c8e2e391a50e97c9059302a2627fb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722302990639336990,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bv6tr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04fea64b-4023-4f0c-aa89-695fb909b5ff,},Annotations:map[string]string{io.kubernetes.container.hash: 65d0b59,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d5ef68381e3c870479b26b0839128977c313afeb6f04f15230f4be7c1d7fc1,PodSandboxId:588ce4c8fee6892ad9dfcf34cb6078b3434a87151ff9b4006f66ebe45d749e3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722302984608138094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 4a39557e-ad6d-426a-8452-d67e5c1f31a8,},Annotations:map[string]string{io.kubernetes.container.hash: d4fc46a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f2d2cb48a9604108b711f9ac12a1b996c41a2639be1559a9f0fe4f11aeb9397,PodSandboxId:c72de0203097d1d45f580c8aafa5029710125246573bf58b732a347bf57f5e89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722302983453963347,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7gpc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587
9398a-a6fa-4ffd-93b7-4be06a194738,},Annotations:map[string]string{io.kubernetes.container.hash: 4b7651fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aca5e62f3ed267421757d7b2a1cca3b932df39538f2a6a4e87bc1cef3fef2667,PodSandboxId:588ce4c8fee6892ad9dfcf34cb6078b3434a87151ff9b4006f66ebe45d749e3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722302983450309214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a39557e-ad6d-42
6a-8452-d67e5c1f31a8,},Annotations:map[string]string{io.kubernetes.container.hash: d4fc46a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f55f9b1644f6daf5d1e5be8bf28f7c4200f868a05170ee9377f7190e4366a43,PodSandboxId:49dd355ac90b55e679b21b443bb1d27057ac4e56165fe85c5e150e614993c55e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722302977213791937,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 554f49a508920cbba12c054f97566d4c,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 57a610bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5b84cfb146586dd65cf99817c5db7b778abe7b3ffc9bc78760f40d8b3142cb,PodSandboxId:c24802430a201bfcccb79dddf5a2c1d56c77c6e5d8cc356a79f18ba96f1c6f8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722302977197308508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e9b109903998134c215aae632cd0991,},Annotations:map[string]string
{io.kubernetes.container.hash: f34b2bdf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb794ada12cbe617b2fb0426010298ec98a7e4f008e347e148bf1dfc401fcaa3,PodSandboxId:5b21a9a1fbd3b77d84c28f49d57967b36cc257a8afe63f3e1bb6314cf5d50fa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722302977136239889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0497c56efe1327bb78d84465bfd045,},Annotations:map[string]string{io.kuberne
tes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28b1b1552a2944376ebac65ed8385cad820f4f146495acdde65a504d91c25469,PodSandboxId:09165fef4acd56e9eb40d248ff37356e62087e46a3259e1244c80a03315163fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722302977112325194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-637752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6f1c52c3fc04ae7034d60ebe1c25d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a85f1ee1-f2f4-40d7-bcb5-a622af9721fd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3c32294e85216       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   4300fa7b01dfa       coredns-6d4b75cb6d-bv6tr
	55d5ef68381e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 seconds ago      Running             storage-provisioner       2                   588ce4c8fee68       storage-provisioner
	8f2d2cb48a960       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   11 seconds ago      Running             kube-proxy                1                   c72de0203097d       kube-proxy-7gpc8
	aca5e62f3ed26       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Exited              storage-provisioner       1                   588ce4c8fee68       storage-provisioner
	2f55f9b1644f6       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   49dd355ac90b5       etcd-test-preload-637752
	ae5b84cfb1465       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   c24802430a201       kube-apiserver-test-preload-637752
	eb794ada12cbe       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   5b21a9a1fbd3b       kube-scheduler-test-preload-637752
	28b1b1552a294       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   09165fef4acd5       kube-controller-manager-test-preload-637752
	
	
	==> coredns [3c32294e85216c8059311965fae38f28dc84e906ab4f40d0dfb25961626ece4e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:48072 - 30365 "HINFO IN 3869689930320131301.4166291535085358928. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015718382s
	
	
	==> describe nodes <==
	Name:               test-preload-637752
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-637752
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260fb3b3c668416d4de4f98d706728fbce690500
	                    minikube.k8s.io/name=test-preload-637752
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T01_28_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 01:28:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-637752
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 01:29:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 01:29:51 +0000   Tue, 30 Jul 2024 01:28:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 01:29:51 +0000   Tue, 30 Jul 2024 01:28:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 01:29:51 +0000   Tue, 30 Jul 2024 01:28:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 01:29:51 +0000   Tue, 30 Jul 2024 01:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    test-preload-637752
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ebab9ad393149dabc29cc8d7222fff5
	  System UUID:                7ebab9ad-3931-49da-bc29-cc8d7222fff5
	  Boot ID:                    19ef3226-221b-4460-9d5e-2da60b70c615
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-bv6tr                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-test-preload-637752                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         94s
	  kube-system                 kube-apiserver-test-preload-637752             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-test-preload-637752    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-7gpc8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-test-preload-637752             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 11s                  kube-proxy       
	  Normal  Starting                 79s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  100s (x4 over 100s)  kubelet          Node test-preload-637752 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x4 over 100s)  kubelet          Node test-preload-637752 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x4 over 100s)  kubelet          Node test-preload-637752 status is now: NodeHasSufficientPID
	  Normal  Starting                 93s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  93s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  93s                  kubelet          Node test-preload-637752 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                  kubelet          Node test-preload-637752 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s                  kubelet          Node test-preload-637752 status is now: NodeHasSufficientPID
	  Normal  NodeReady                82s                  kubelet          Node test-preload-637752 status is now: NodeReady
	  Normal  RegisteredNode           81s                  node-controller  Node test-preload-637752 event: Registered Node test-preload-637752 in Controller
	  Normal  Starting                 18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)    kubelet          Node test-preload-637752 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)    kubelet          Node test-preload-637752 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)    kubelet          Node test-preload-637752 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                   node-controller  Node test-preload-637752 event: Registered Node test-preload-637752 in Controller
	
	
	==> dmesg <==
	[Jul30 01:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051474] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039002] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.707654] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.885662] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.547553] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.645962] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.059189] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057425] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.161956] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.133674] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.247684] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[ +12.618701] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +0.061378] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.774233] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +6.568768] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.535445] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +6.121660] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [2f55f9b1644f6daf5d1e5be8bf28f7c4200f868a05170ee9377f7190e4366a43] <==
	{"level":"info","ts":"2024-07-30T01:29:37.609Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"fc85001aa37e7974","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-30T01:29:37.619Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-30T01:29:37.620Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fc85001aa37e7974","initial-advertise-peer-urls":["https://192.168.39.146:2380"],"listen-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.146:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-30T01:29:37.620Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-30T01:29:37.620Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 switched to configuration voters=(18195949983872481652)"}
	{"level":"info","ts":"2024-07-30T01:29:37.620Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-30T01:29:37.622Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-30T01:29:37.622Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","added-peer-id":"fc85001aa37e7974","added-peer-peer-urls":["https://192.168.39.146:2380"]}
	{"level":"info","ts":"2024-07-30T01:29:37.622Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T01:29:37.622Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T01:29:37.624Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-30T01:29:39.271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-30T01:29:39.271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-30T01:29:39.271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgPreVoteResp from fc85001aa37e7974 at term 2"}
	{"level":"info","ts":"2024-07-30T01:29:39.271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became candidate at term 3"}
	{"level":"info","ts":"2024-07-30T01:29:39.271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 received MsgVoteResp from fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-07-30T01:29:39.271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 became leader at term 3"}
	{"level":"info","ts":"2024-07-30T01:29:39.271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fc85001aa37e7974 elected leader fc85001aa37e7974 at term 3"}
	{"level":"info","ts":"2024-07-30T01:29:39.277Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"fc85001aa37e7974","local-member-attributes":"{Name:test-preload-637752 ClientURLs:[https://192.168.39.146:2379]}","request-path":"/0/members/fc85001aa37e7974/attributes","cluster-id":"25c4f0770a3181de","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T01:29:39.277Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T01:29:39.278Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T01:29:39.278Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T01:29:39.278Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T01:29:39.278Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-30T01:29:39.279Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.146:2379"}
	
	
	==> kernel <==
	 01:29:55 up 0 min,  0 users,  load average: 0.63, 0.17, 0.06
	Linux test-preload-637752 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae5b84cfb146586dd65cf99817c5db7b778abe7b3ffc9bc78760f40d8b3142cb] <==
	I0730 01:29:41.595676       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0730 01:29:41.596042       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0730 01:29:41.646137       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0730 01:29:41.628490       1 controller.go:85] Starting OpenAPI controller
	I0730 01:29:41.628515       1 controller.go:85] Starting OpenAPI V3 controller
	I0730 01:29:41.628546       1 naming_controller.go:291] Starting NamingConditionController
	E0730 01:29:41.725788       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0730 01:29:41.739263       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0730 01:29:41.746180       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0730 01:29:41.785394       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0730 01:29:41.785432       1 cache.go:39] Caches are synced for autoregister controller
	I0730 01:29:41.785653       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 01:29:41.791060       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 01:29:41.795581       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0730 01:29:41.813334       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0730 01:29:42.248770       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0730 01:29:42.590119       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0730 01:29:43.137790       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0730 01:29:43.149996       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0730 01:29:43.181156       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0730 01:29:43.202336       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0730 01:29:43.208760       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0730 01:29:43.737579       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0730 01:29:54.784692       1 controller.go:611] quota admission added evaluator for: endpoints
	I0730 01:29:54.834631       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [28b1b1552a2944376ebac65ed8385cad820f4f146495acdde65a504d91c25469] <==
	I0730 01:29:54.680989       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0730 01:29:54.683484       1 shared_informer.go:262] Caches are synced for namespace
	I0730 01:29:54.694411       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0730 01:29:54.695743       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0730 01:29:54.697986       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0730 01:29:54.698051       1 shared_informer.go:262] Caches are synced for taint
	I0730 01:29:54.698132       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0730 01:29:54.698321       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0730 01:29:54.698434       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0730 01:29:54.698652       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-637752. Assuming now as a timestamp.
	I0730 01:29:54.698726       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0730 01:29:54.698934       1 event.go:294] "Event occurred" object="test-preload-637752" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-637752 event: Registered Node test-preload-637752 in Controller"
	I0730 01:29:54.700968       1 shared_informer.go:262] Caches are synced for expand
	I0730 01:29:54.701773       1 shared_informer.go:262] Caches are synced for endpoint
	I0730 01:29:54.705029       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0730 01:29:54.706329       1 shared_informer.go:262] Caches are synced for job
	I0730 01:29:54.715423       1 shared_informer.go:262] Caches are synced for disruption
	I0730 01:29:54.715487       1 disruption.go:371] Sending events to api server.
	I0730 01:29:54.729051       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0730 01:29:54.746532       1 shared_informer.go:262] Caches are synced for HPA
	I0730 01:29:54.770274       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0730 01:29:54.836693       1 shared_informer.go:262] Caches are synced for resource quota
	I0730 01:29:54.846130       1 shared_informer.go:262] Caches are synced for stateful set
	I0730 01:29:54.858235       1 shared_informer.go:262] Caches are synced for resource quota
	I0730 01:29:54.883771       1 shared_informer.go:262] Caches are synced for daemon sets
	
	
	==> kube-proxy [8f2d2cb48a9604108b711f9ac12a1b996c41a2639be1559a9f0fe4f11aeb9397] <==
	I0730 01:29:43.694240       1 node.go:163] Successfully retrieved node IP: 192.168.39.146
	I0730 01:29:43.694469       1 server_others.go:138] "Detected node IP" address="192.168.39.146"
	I0730 01:29:43.694568       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0730 01:29:43.725779       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0730 01:29:43.725796       1 server_others.go:206] "Using iptables Proxier"
	I0730 01:29:43.726414       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0730 01:29:43.727160       1 server.go:661] "Version info" version="v1.24.4"
	I0730 01:29:43.727172       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 01:29:43.728627       1 config.go:317] "Starting service config controller"
	I0730 01:29:43.728918       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0730 01:29:43.729069       1 config.go:226] "Starting endpoint slice config controller"
	I0730 01:29:43.729142       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0730 01:29:43.732628       1 config.go:444] "Starting node config controller"
	I0730 01:29:43.732653       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0730 01:29:43.829397       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0730 01:29:43.829447       1 shared_informer.go:262] Caches are synced for service config
	I0730 01:29:43.839226       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [eb794ada12cbe617b2fb0426010298ec98a7e4f008e347e148bf1dfc401fcaa3] <==
	I0730 01:29:37.840527       1 serving.go:348] Generated self-signed cert in-memory
	W0730 01:29:41.650918       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0730 01:29:41.651148       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 01:29:41.651244       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0730 01:29:41.651272       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0730 01:29:41.728599       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0730 01:29:41.729012       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 01:29:41.732502       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0730 01:29:41.732603       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0730 01:29:41.732506       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0730 01:29:41.732533       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0730 01:29:41.833572       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.418462    1073 apiserver.go:52] "Watching apiserver"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.423214    1073 topology_manager.go:200] "Topology Admit Handler"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.423350    1073 topology_manager.go:200] "Topology Admit Handler"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.423432    1073 topology_manager.go:200] "Topology Admit Handler"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: E0730 01:29:42.426270    1073 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-bv6tr" podUID=04fea64b-4023-4f0c-aa89-695fb909b5ff
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.488166    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frvwk\" (UniqueName: \"kubernetes.io/projected/4a39557e-ad6d-426a-8452-d67e5c1f31a8-kube-api-access-frvwk\") pod \"storage-provisioner\" (UID: \"4a39557e-ad6d-426a-8452-d67e5c1f31a8\") " pod="kube-system/storage-provisioner"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.488668    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5879398a-a6fa-4ffd-93b7-4be06a194738-kube-proxy\") pod \"kube-proxy-7gpc8\" (UID: \"5879398a-a6fa-4ffd-93b7-4be06a194738\") " pod="kube-system/kube-proxy-7gpc8"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.488839    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8md\" (UniqueName: \"kubernetes.io/projected/5879398a-a6fa-4ffd-93b7-4be06a194738-kube-api-access-gd8md\") pod \"kube-proxy-7gpc8\" (UID: \"5879398a-a6fa-4ffd-93b7-4be06a194738\") " pod="kube-system/kube-proxy-7gpc8"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.488947    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04fea64b-4023-4f0c-aa89-695fb909b5ff-config-volume\") pod \"coredns-6d4b75cb6d-bv6tr\" (UID: \"04fea64b-4023-4f0c-aa89-695fb909b5ff\") " pod="kube-system/coredns-6d4b75cb6d-bv6tr"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.489019    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtbs\" (UniqueName: \"kubernetes.io/projected/04fea64b-4023-4f0c-aa89-695fb909b5ff-kube-api-access-ldtbs\") pod \"coredns-6d4b75cb6d-bv6tr\" (UID: \"04fea64b-4023-4f0c-aa89-695fb909b5ff\") " pod="kube-system/coredns-6d4b75cb6d-bv6tr"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.489120    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5879398a-a6fa-4ffd-93b7-4be06a194738-xtables-lock\") pod \"kube-proxy-7gpc8\" (UID: \"5879398a-a6fa-4ffd-93b7-4be06a194738\") " pod="kube-system/kube-proxy-7gpc8"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.489183    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5879398a-a6fa-4ffd-93b7-4be06a194738-lib-modules\") pod \"kube-proxy-7gpc8\" (UID: \"5879398a-a6fa-4ffd-93b7-4be06a194738\") " pod="kube-system/kube-proxy-7gpc8"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.489245    1073 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4a39557e-ad6d-426a-8452-d67e5c1f31a8-tmp\") pod \"storage-provisioner\" (UID: \"4a39557e-ad6d-426a-8452-d67e5c1f31a8\") " pod="kube-system/storage-provisioner"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.489322    1073 reconciler.go:159] "Reconciler: start to sync state"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: I0730 01:29:42.528751    1073 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c770915a-ddaf-4051-ad8b-50d0e45ca81d path="/var/lib/kubelet/pods/c770915a-ddaf-4051-ad8b-50d0e45ca81d/volumes"
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: E0730 01:29:42.592955    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 30 01:29:42 test-preload-637752 kubelet[1073]: E0730 01:29:42.593167    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/04fea64b-4023-4f0c-aa89-695fb909b5ff-config-volume podName:04fea64b-4023-4f0c-aa89-695fb909b5ff nodeName:}" failed. No retries permitted until 2024-07-30 01:29:43.093125743 +0000 UTC m=+6.785643183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/04fea64b-4023-4f0c-aa89-695fb909b5ff-config-volume") pod "coredns-6d4b75cb6d-bv6tr" (UID: "04fea64b-4023-4f0c-aa89-695fb909b5ff") : object "kube-system"/"coredns" not registered
	Jul 30 01:29:43 test-preload-637752 kubelet[1073]: E0730 01:29:43.096454    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 30 01:29:43 test-preload-637752 kubelet[1073]: E0730 01:29:43.096567    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/04fea64b-4023-4f0c-aa89-695fb909b5ff-config-volume podName:04fea64b-4023-4f0c-aa89-695fb909b5ff nodeName:}" failed. No retries permitted until 2024-07-30 01:29:44.096541981 +0000 UTC m=+7.789059430 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/04fea64b-4023-4f0c-aa89-695fb909b5ff-config-volume") pod "coredns-6d4b75cb6d-bv6tr" (UID: "04fea64b-4023-4f0c-aa89-695fb909b5ff") : object "kube-system"/"coredns" not registered
	Jul 30 01:29:44 test-preload-637752 kubelet[1073]: E0730 01:29:44.104275    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 30 01:29:44 test-preload-637752 kubelet[1073]: E0730 01:29:44.104383    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/04fea64b-4023-4f0c-aa89-695fb909b5ff-config-volume podName:04fea64b-4023-4f0c-aa89-695fb909b5ff nodeName:}" failed. No retries permitted until 2024-07-30 01:29:46.104366194 +0000 UTC m=+9.796883635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/04fea64b-4023-4f0c-aa89-695fb909b5ff-config-volume") pod "coredns-6d4b75cb6d-bv6tr" (UID: "04fea64b-4023-4f0c-aa89-695fb909b5ff") : object "kube-system"/"coredns" not registered
	Jul 30 01:29:44 test-preload-637752 kubelet[1073]: E0730 01:29:44.522223    1073 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-bv6tr" podUID=04fea64b-4023-4f0c-aa89-695fb909b5ff
	Jul 30 01:29:44 test-preload-637752 kubelet[1073]: I0730 01:29:44.585278    1073 scope.go:110] "RemoveContainer" containerID="aca5e62f3ed267421757d7b2a1cca3b932df39538f2a6a4e87bc1cef3fef2667"
	Jul 30 01:29:46 test-preload-637752 kubelet[1073]: E0730 01:29:46.127129    1073 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 30 01:29:46 test-preload-637752 kubelet[1073]: E0730 01:29:46.127219    1073 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/04fea64b-4023-4f0c-aa89-695fb909b5ff-config-volume podName:04fea64b-4023-4f0c-aa89-695fb909b5ff nodeName:}" failed. No retries permitted until 2024-07-30 01:29:50.127204414 +0000 UTC m=+13.819721855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/04fea64b-4023-4f0c-aa89-695fb909b5ff-config-volume") pod "coredns-6d4b75cb6d-bv6tr" (UID: "04fea64b-4023-4f0c-aa89-695fb909b5ff") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [55d5ef68381e3c870479b26b0839128977c313afeb6f04f15230f4be7c1d7fc1] <==
	I0730 01:29:44.743969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0730 01:29:44.763257       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0730 01:29:44.763319       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [aca5e62f3ed267421757d7b2a1cca3b932df39538f2a6a4e87bc1cef3fef2667] <==
	I0730 01:29:43.557339       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0730 01:29:43.578188       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-637752 -n test-preload-637752
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-637752 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-637752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-637752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-637752: (1.150332067s)
--- FAIL: TestPreload (240.12s)
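For manual triage before the profile is cleaned up, the same post-mortem checks recorded above can be replayed by hand. A minimal sketch, using only commands that already appear in this report (profile name test-preload-637752, run from the directory containing out/minikube-linux-amd64):

	# confirm the API server status that helpers_test.go queried
	out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-637752 -n test-preload-637752
	# list pods that are not in the Running phase, as in the post-mortem above
	kubectl --context test-preload-637752 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# remove the profile afterwards, as the test cleanup does
	out/minikube-linux-amd64 delete -p test-preload-637752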

                                                
                                    
x
+
TestKubernetesUpgrade (449.71s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m56.102415352s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-599146] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19346
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-599146" primary control-plane node in "kubernetes-upgrade-599146" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 01:31:50.550699  541833 out.go:291] Setting OutFile to fd 1 ...
	I0730 01:31:50.550826  541833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:31:50.550836  541833 out.go:304] Setting ErrFile to fd 2...
	I0730 01:31:50.550842  541833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:31:50.551036  541833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 01:31:50.551790  541833 out.go:298] Setting JSON to false
	I0730 01:31:50.552952  541833 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11653,"bootTime":1722291458,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 01:31:50.553029  541833 start.go:139] virtualization: kvm guest
	I0730 01:31:50.554241  541833 out.go:177] * [kubernetes-upgrade-599146] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 01:31:50.555780  541833 notify.go:220] Checking for updates...
	I0730 01:31:50.556889  541833 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 01:31:50.559298  541833 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 01:31:50.562226  541833 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 01:31:50.563421  541833 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 01:31:50.565110  541833 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 01:31:50.567708  541833 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 01:31:50.569404  541833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 01:31:50.612211  541833 out.go:177] * Using the kvm2 driver based on user configuration
	I0730 01:31:50.613348  541833 start.go:297] selected driver: kvm2
	I0730 01:31:50.613366  541833 start.go:901] validating driver "kvm2" against <nil>
	I0730 01:31:50.613377  541833 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 01:31:50.614322  541833 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:31:50.628480  541833 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 01:31:50.648448  541833 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 01:31:50.648502  541833 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 01:31:50.648688  541833 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0730 01:31:50.648761  541833 cni.go:84] Creating CNI manager for ""
	I0730 01:31:50.648779  541833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 01:31:50.648788  541833 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0730 01:31:50.648866  541833 start.go:340] cluster config:
	{Name:kubernetes-upgrade-599146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-599146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:31:50.648992  541833 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:31:50.650559  541833 out.go:177] * Starting "kubernetes-upgrade-599146" primary control-plane node in "kubernetes-upgrade-599146" cluster
	I0730 01:31:50.651607  541833 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0730 01:31:50.651657  541833 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0730 01:31:50.651676  541833 cache.go:56] Caching tarball of preloaded images
	I0730 01:31:50.651773  541833 preload.go:172] Found /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0730 01:31:50.651787  541833 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0730 01:31:50.652256  541833 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/config.json ...
	I0730 01:31:50.652288  541833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/config.json: {Name:mka896e1cfdb21a77c184370ef0d149cd4b71dfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:31:50.652450  541833 start.go:360] acquireMachinesLock for kubernetes-upgrade-599146: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 01:32:17.617207  541833 start.go:364] duration metric: took 26.964705381s to acquireMachinesLock for "kubernetes-upgrade-599146"
	I0730 01:32:17.617289  541833 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-599146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-599146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 01:32:17.617431  541833 start.go:125] createHost starting for "" (driver="kvm2")
	I0730 01:32:17.619687  541833 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0730 01:32:17.619886  541833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:32:17.619926  541833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:32:17.636325  541833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0730 01:32:17.636802  541833 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:32:17.637376  541833 main.go:141] libmachine: Using API Version  1
	I0730 01:32:17.637398  541833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:32:17.637744  541833 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:32:17.637933  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetMachineName
	I0730 01:32:17.638102  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:32:17.638260  541833 start.go:159] libmachine.API.Create for "kubernetes-upgrade-599146" (driver="kvm2")
	I0730 01:32:17.638286  541833 client.go:168] LocalClient.Create starting
	I0730 01:32:17.638315  541833 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 01:32:17.638346  541833 main.go:141] libmachine: Decoding PEM data...
	I0730 01:32:17.638362  541833 main.go:141] libmachine: Parsing certificate...
	I0730 01:32:17.638417  541833 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 01:32:17.638435  541833 main.go:141] libmachine: Decoding PEM data...
	I0730 01:32:17.638446  541833 main.go:141] libmachine: Parsing certificate...
	I0730 01:32:17.638462  541833 main.go:141] libmachine: Running pre-create checks...
	I0730 01:32:17.638473  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .PreCreateCheck
	I0730 01:32:17.638783  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetConfigRaw
	I0730 01:32:17.639160  541833 main.go:141] libmachine: Creating machine...
	I0730 01:32:17.639173  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .Create
	I0730 01:32:17.639328  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Creating KVM machine...
	I0730 01:32:17.640316  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found existing default KVM network
	I0730 01:32:17.641242  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:17.641096  542188 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:05:46:72} reservation:<nil>}
	I0730 01:32:17.642015  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:17.641923  542188 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fca0}
	I0730 01:32:17.642043  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | created network xml: 
	I0730 01:32:17.642058  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | <network>
	I0730 01:32:17.642080  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG |   <name>mk-kubernetes-upgrade-599146</name>
	I0730 01:32:17.642093  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG |   <dns enable='no'/>
	I0730 01:32:17.642105  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG |   
	I0730 01:32:17.642121  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0730 01:32:17.642137  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG |     <dhcp>
	I0730 01:32:17.642168  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0730 01:32:17.642208  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG |     </dhcp>
	I0730 01:32:17.642253  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG |   </ip>
	I0730 01:32:17.642279  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG |   
	I0730 01:32:17.642286  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | </network>
	I0730 01:32:17.642291  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | 
	I0730 01:32:17.647981  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | trying to create private KVM network mk-kubernetes-upgrade-599146 192.168.50.0/24...
	I0730 01:32:17.723535  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | private KVM network mk-kubernetes-upgrade-599146 192.168.50.0/24 created
	I0730 01:32:17.723568  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146 ...
	I0730 01:32:17.723591  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:17.723517  542188 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 01:32:17.723611  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 01:32:17.723703  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 01:32:17.983262  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:17.983073  542188 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa...
	I0730 01:32:18.229546  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:18.229400  542188 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/kubernetes-upgrade-599146.rawdisk...
	I0730 01:32:18.229576  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Writing magic tar header
	I0730 01:32:18.229589  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Writing SSH key tar header
	I0730 01:32:18.229611  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:18.229532  542188 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146 ...
	I0730 01:32:18.229629  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146
	I0730 01:32:18.229653  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 01:32:18.229674  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146 (perms=drwx------)
	I0730 01:32:18.229688  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 01:32:18.229718  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 01:32:18.229724  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 01:32:18.229746  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Checking permissions on dir: /home/jenkins
	I0730 01:32:18.229763  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 01:32:18.229776  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Checking permissions on dir: /home
	I0730 01:32:18.229785  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Skipping /home - not owner
	I0730 01:32:18.229796  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 01:32:18.229802  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 01:32:18.229810  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 01:32:18.229818  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 01:32:18.229831  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Creating domain...
	I0730 01:32:18.230908  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) define libvirt domain using xml: 
	I0730 01:32:18.230936  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) <domain type='kvm'>
	I0730 01:32:18.230948  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   <name>kubernetes-upgrade-599146</name>
	I0730 01:32:18.230957  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   <memory unit='MiB'>2200</memory>
	I0730 01:32:18.230965  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   <vcpu>2</vcpu>
	I0730 01:32:18.230975  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   <features>
	I0730 01:32:18.230983  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <acpi/>
	I0730 01:32:18.230993  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <apic/>
	I0730 01:32:18.231010  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <pae/>
	I0730 01:32:18.231021  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     
	I0730 01:32:18.231029  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   </features>
	I0730 01:32:18.231044  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   <cpu mode='host-passthrough'>
	I0730 01:32:18.231057  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   
	I0730 01:32:18.231077  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   </cpu>
	I0730 01:32:18.231099  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   <os>
	I0730 01:32:18.231110  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <type>hvm</type>
	I0730 01:32:18.231161  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <boot dev='cdrom'/>
	I0730 01:32:18.231188  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <boot dev='hd'/>
	I0730 01:32:18.231226  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <bootmenu enable='no'/>
	I0730 01:32:18.231255  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   </os>
	I0730 01:32:18.231266  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   <devices>
	I0730 01:32:18.231275  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <disk type='file' device='cdrom'>
	I0730 01:32:18.231303  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/boot2docker.iso'/>
	I0730 01:32:18.231323  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <target dev='hdc' bus='scsi'/>
	I0730 01:32:18.231329  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <readonly/>
	I0730 01:32:18.231341  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     </disk>
	I0730 01:32:18.231349  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <disk type='file' device='disk'>
	I0730 01:32:18.231356  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 01:32:18.231368  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/kubernetes-upgrade-599146.rawdisk'/>
	I0730 01:32:18.231375  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <target dev='hda' bus='virtio'/>
	I0730 01:32:18.231381  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     </disk>
	I0730 01:32:18.231386  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <interface type='network'>
	I0730 01:32:18.231403  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <source network='mk-kubernetes-upgrade-599146'/>
	I0730 01:32:18.231420  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <model type='virtio'/>
	I0730 01:32:18.231457  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     </interface>
	I0730 01:32:18.231476  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <interface type='network'>
	I0730 01:32:18.231487  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <source network='default'/>
	I0730 01:32:18.231503  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <model type='virtio'/>
	I0730 01:32:18.231515  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     </interface>
	I0730 01:32:18.231526  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <serial type='pty'>
	I0730 01:32:18.231539  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <target port='0'/>
	I0730 01:32:18.231548  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     </serial>
	I0730 01:32:18.231555  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <console type='pty'>
	I0730 01:32:18.231565  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <target type='serial' port='0'/>
	I0730 01:32:18.231580  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     </console>
	I0730 01:32:18.231595  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     <rng model='virtio'>
	I0730 01:32:18.231608  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)       <backend model='random'>/dev/random</backend>
	I0730 01:32:18.231619  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     </rng>
	I0730 01:32:18.231629  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     
	I0730 01:32:18.231638  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)     
	I0730 01:32:18.231643  541833 main.go:141] libmachine: (kubernetes-upgrade-599146)   </devices>
	I0730 01:32:18.231653  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) </domain>
	I0730 01:32:18.231665  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) 
	I0730 01:32:18.235831  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:07:65:b3 in network default
	I0730 01:32:18.236401  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Ensuring networks are active...
	I0730 01:32:18.236422  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:18.237094  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Ensuring network default is active
	I0730 01:32:18.237435  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Ensuring network mk-kubernetes-upgrade-599146 is active
	I0730 01:32:18.237927  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Getting domain xml...
	I0730 01:32:18.238635  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Creating domain...
	I0730 01:32:19.547815  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Waiting to get IP...
	I0730 01:32:19.548865  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:19.549281  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:19.549351  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:19.549274  542188 retry.go:31] will retry after 216.877607ms: waiting for machine to come up
	I0730 01:32:19.767754  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:19.768388  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:19.768414  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:19.768343  542188 retry.go:31] will retry after 311.368564ms: waiting for machine to come up
	I0730 01:32:20.081127  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:20.081628  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:20.081682  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:20.081592  542188 retry.go:31] will retry after 298.987137ms: waiting for machine to come up
	I0730 01:32:20.381995  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:20.383136  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:20.383166  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:20.383094  542188 retry.go:31] will retry after 595.883673ms: waiting for machine to come up
	I0730 01:32:20.981123  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:20.981678  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:20.981711  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:20.981635  542188 retry.go:31] will retry after 481.298625ms: waiting for machine to come up
	I0730 01:32:21.464463  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:21.464986  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:21.465018  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:21.464929  542188 retry.go:31] will retry after 747.531563ms: waiting for machine to come up
	I0730 01:32:22.214048  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:22.214579  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:22.214612  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:22.214511  542188 retry.go:31] will retry after 976.812515ms: waiting for machine to come up
	I0730 01:32:23.192939  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:23.193385  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:23.193424  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:23.193362  542188 retry.go:31] will retry after 1.383098182s: waiting for machine to come up
	I0730 01:32:24.578110  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:24.578627  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:24.578659  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:24.578571  542188 retry.go:31] will retry after 1.672107954s: waiting for machine to come up
	I0730 01:32:26.253464  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:26.253972  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:26.254004  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:26.253905  542188 retry.go:31] will retry after 1.675260144s: waiting for machine to come up
	I0730 01:32:27.930431  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:27.930877  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:27.930906  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:27.930837  542188 retry.go:31] will retry after 2.621004671s: waiting for machine to come up
	I0730 01:32:30.554040  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:30.554396  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:30.554418  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:30.554376  542188 retry.go:31] will retry after 3.16113906s: waiting for machine to come up
	I0730 01:32:33.717591  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:33.718081  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find current IP address of domain kubernetes-upgrade-599146 in network mk-kubernetes-upgrade-599146
	I0730 01:32:33.718107  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | I0730 01:32:33.718022  542188 retry.go:31] will retry after 4.202365578s: waiting for machine to come up
	I0730 01:32:37.922933  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:37.923423  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Found IP for machine: 192.168.50.97
	I0730 01:32:37.923457  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has current primary IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:37.923466  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Reserving static IP address...
	I0730 01:32:37.923713  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-599146", mac: "52:54:00:46:c0:27", ip: "192.168.50.97"} in network mk-kubernetes-upgrade-599146
	I0730 01:32:38.083973  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Getting to WaitForSSH function...
	I0730 01:32:38.084031  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Reserved static IP address: 192.168.50.97
	I0730 01:32:38.084049  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Waiting for SSH to be available...
	I0730 01:32:38.086667  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:38.086970  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146
	I0730 01:32:38.087000  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-599146 interface with MAC address 52:54:00:46:c0:27
	I0730 01:32:38.087150  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Using SSH client type: external
	I0730 01:32:38.087185  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa (-rw-------)
	I0730 01:32:38.087228  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 01:32:38.087243  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | About to run SSH command:
	I0730 01:32:38.087259  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | exit 0
	I0730 01:32:38.090946  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | SSH cmd err, output: exit status 255: 
	I0730 01:32:38.090975  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0730 01:32:38.090991  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | command : exit 0
	I0730 01:32:38.091006  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | err     : exit status 255
	I0730 01:32:38.091017  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | output  : 
	I0730 01:32:41.093159  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Getting to WaitForSSH function...
	I0730 01:32:41.095673  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.096137  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:41.096172  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.096353  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Using SSH client type: external
	I0730 01:32:41.096382  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa (-rw-------)
	I0730 01:32:41.096409  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 01:32:41.096423  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | About to run SSH command:
	I0730 01:32:41.096438  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | exit 0
	I0730 01:32:41.216480  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | SSH cmd err, output: <nil>: 
	I0730 01:32:41.216739  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) KVM machine creation complete!
	I0730 01:32:41.217066  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetConfigRaw
	I0730 01:32:41.217690  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:32:41.217927  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:32:41.218119  541833 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 01:32:41.218138  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetState
	I0730 01:32:41.219383  541833 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 01:32:41.219401  541833 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 01:32:41.219410  541833 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 01:32:41.219419  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:41.221572  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.221815  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:41.221846  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.221923  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:41.222109  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:41.222411  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:41.222586  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:41.222751  541833 main.go:141] libmachine: Using SSH client type: native
	I0730 01:32:41.222951  541833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:32:41.222962  541833 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 01:32:41.323937  541833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 01:32:41.323963  541833 main.go:141] libmachine: Detecting the provisioner...
	I0730 01:32:41.323971  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:41.326709  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.327056  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:41.327103  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.327221  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:41.327433  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:41.327602  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:41.327773  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:41.327967  541833 main.go:141] libmachine: Using SSH client type: native
	I0730 01:32:41.328142  541833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:32:41.328154  541833 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 01:32:41.432977  541833 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 01:32:41.433063  541833 main.go:141] libmachine: found compatible host: buildroot
	I0730 01:32:41.433074  541833 main.go:141] libmachine: Provisioning with buildroot...
	I0730 01:32:41.433083  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetMachineName
	I0730 01:32:41.433345  541833 buildroot.go:166] provisioning hostname "kubernetes-upgrade-599146"
	I0730 01:32:41.433367  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetMachineName
	I0730 01:32:41.433620  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:41.436225  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.436537  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:41.436559  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.436690  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:41.436899  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:41.437150  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:41.437298  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:41.437459  541833 main.go:141] libmachine: Using SSH client type: native
	I0730 01:32:41.437623  541833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:32:41.437635  541833 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-599146 && echo "kubernetes-upgrade-599146" | sudo tee /etc/hostname
	I0730 01:32:41.549770  541833 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-599146
	
	I0730 01:32:41.549808  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:41.552337  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.552699  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:41.552751  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.552989  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:41.553227  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:41.553392  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:41.553522  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:41.553672  541833 main.go:141] libmachine: Using SSH client type: native
	I0730 01:32:41.553894  541833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:32:41.553912  541833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-599146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-599146/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-599146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 01:32:41.660509  541833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 01:32:41.660542  541833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 01:32:41.660567  541833 buildroot.go:174] setting up certificates
	I0730 01:32:41.660579  541833 provision.go:84] configureAuth start
	I0730 01:32:41.660591  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetMachineName
	I0730 01:32:41.660919  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetIP
	I0730 01:32:41.663602  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.663932  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:41.663959  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.664171  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:41.666435  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.666760  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:41.666791  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.666932  541833 provision.go:143] copyHostCerts
	I0730 01:32:41.667004  541833 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 01:32:41.667023  541833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 01:32:41.667085  541833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 01:32:41.667201  541833 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 01:32:41.667213  541833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 01:32:41.667242  541833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 01:32:41.667324  541833 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 01:32:41.667335  541833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 01:32:41.667361  541833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 01:32:41.667425  541833 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-599146 san=[127.0.0.1 192.168.50.97 kubernetes-upgrade-599146 localhost minikube]
	I0730 01:32:41.855194  541833 provision.go:177] copyRemoteCerts
	I0730 01:32:41.855255  541833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 01:32:41.855287  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:41.858493  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.859000  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:41.859044  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:41.859211  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:41.859431  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:41.859610  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:41.859775  541833 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa Username:docker}
	I0730 01:32:41.942590  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 01:32:41.964546  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0730 01:32:41.986262  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 01:32:42.008747  541833 provision.go:87] duration metric: took 348.148522ms to configureAuth
	I0730 01:32:42.008789  541833 buildroot.go:189] setting minikube options for container-runtime
	I0730 01:32:42.008962  541833 config.go:182] Loaded profile config "kubernetes-upgrade-599146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0730 01:32:42.009041  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:42.011704  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.012066  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:42.012089  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.012251  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:42.012461  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:42.012632  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:42.012808  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:42.012958  541833 main.go:141] libmachine: Using SSH client type: native
	I0730 01:32:42.013163  541833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:32:42.013180  541833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 01:32:42.261803  541833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 01:32:42.261851  541833 main.go:141] libmachine: Checking connection to Docker...
	I0730 01:32:42.261863  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetURL
	I0730 01:32:42.263180  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | Using libvirt version 6000000
	I0730 01:32:42.265317  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.265665  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:42.265703  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.265830  541833 main.go:141] libmachine: Docker is up and running!
	I0730 01:32:42.265843  541833 main.go:141] libmachine: Reticulating splines...
	I0730 01:32:42.265850  541833 client.go:171] duration metric: took 24.627556743s to LocalClient.Create
	I0730 01:32:42.265873  541833 start.go:167] duration metric: took 24.627615566s to libmachine.API.Create "kubernetes-upgrade-599146"
	I0730 01:32:42.265886  541833 start.go:293] postStartSetup for "kubernetes-upgrade-599146" (driver="kvm2")
	I0730 01:32:42.265900  541833 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 01:32:42.265932  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:32:42.266154  541833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 01:32:42.266179  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:42.268170  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.268466  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:42.268490  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.268634  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:42.268833  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:42.268977  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:42.269079  541833 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa Username:docker}
	I0730 01:32:42.352980  541833 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 01:32:42.358090  541833 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 01:32:42.358116  541833 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 01:32:42.358198  541833 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 01:32:42.358269  541833 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 01:32:42.358382  541833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 01:32:42.369084  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:32:42.395606  541833 start.go:296] duration metric: took 129.690486ms for postStartSetup
	I0730 01:32:42.395672  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetConfigRaw
	I0730 01:32:42.396395  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetIP
	I0730 01:32:42.399290  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.399668  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:42.399700  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.399897  541833 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/config.json ...
	I0730 01:32:42.400094  541833 start.go:128] duration metric: took 24.78265028s to createHost
	I0730 01:32:42.400119  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:42.402599  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.402929  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:42.402955  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.403155  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:42.403382  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:42.403577  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:42.403763  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:42.403940  541833 main.go:141] libmachine: Using SSH client type: native
	I0730 01:32:42.404161  541833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:32:42.404176  541833 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0730 01:32:42.509303  541833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722303162.485147475
	
	I0730 01:32:42.509334  541833 fix.go:216] guest clock: 1722303162.485147475
	I0730 01:32:42.509343  541833 fix.go:229] Guest: 2024-07-30 01:32:42.485147475 +0000 UTC Remote: 2024-07-30 01:32:42.40010673 +0000 UTC m=+51.898919904 (delta=85.040745ms)
	I0730 01:32:42.509377  541833 fix.go:200] guest clock delta is within tolerance: 85.040745ms
	I0730 01:32:42.509382  541833 start.go:83] releasing machines lock for "kubernetes-upgrade-599146", held for 24.892130154s
	I0730 01:32:42.509406  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:32:42.509693  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetIP
	I0730 01:32:42.512534  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.512929  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:42.512957  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.513201  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:32:42.513777  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:32:42.513977  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:32:42.514068  541833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 01:32:42.514125  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:42.514223  541833 ssh_runner.go:195] Run: cat /version.json
	I0730 01:32:42.514258  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:32:42.517086  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.517219  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.517461  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:42.517486  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:42.517506  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.517588  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:42.517688  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:42.517875  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:32:42.517881  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:42.518060  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:42.518075  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:32:42.518202  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:32:42.518268  541833 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa Username:docker}
	I0730 01:32:42.518343  541833 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa Username:docker}
	I0730 01:32:42.632311  541833 ssh_runner.go:195] Run: systemctl --version
	I0730 01:32:42.638503  541833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 01:32:42.793952  541833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 01:32:42.802289  541833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 01:32:42.802370  541833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 01:32:42.820589  541833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0730 01:32:42.820621  541833 start.go:495] detecting cgroup driver to use...
	I0730 01:32:42.820698  541833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 01:32:42.837177  541833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 01:32:42.851502  541833 docker.go:217] disabling cri-docker service (if available) ...
	I0730 01:32:42.851574  541833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 01:32:42.867881  541833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 01:32:42.883664  541833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 01:32:43.012082  541833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 01:32:43.178888  541833 docker.go:233] disabling docker service ...
	I0730 01:32:43.178966  541833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 01:32:43.193155  541833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 01:32:43.205946  541833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 01:32:43.348254  541833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 01:32:43.479808  541833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 01:32:43.493658  541833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 01:32:43.510402  541833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0730 01:32:43.510473  541833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:32:43.522927  541833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 01:32:43.523016  541833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:32:43.534589  541833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:32:43.545915  541833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:32:43.557328  541833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 01:32:43.569916  541833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 01:32:43.580906  541833 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 01:32:43.580970  541833 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 01:32:43.593618  541833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 01:32:43.603382  541833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:32:43.732946  541833 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 01:32:43.880475  541833 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 01:32:43.880559  541833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 01:32:43.885115  541833 start.go:563] Will wait 60s for crictl version
	I0730 01:32:43.885182  541833 ssh_runner.go:195] Run: which crictl
	I0730 01:32:43.888479  541833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 01:32:43.939498  541833 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 01:32:43.939702  541833 ssh_runner.go:195] Run: crio --version
	I0730 01:32:43.973853  541833 ssh_runner.go:195] Run: crio --version
	I0730 01:32:44.003863  541833 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0730 01:32:44.005118  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetIP
	I0730 01:32:44.008382  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:44.008813  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:32:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:32:44.008845  541833 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:32:44.009038  541833 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0730 01:32:44.013058  541833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 01:32:44.025381  541833 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-599146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-599146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 01:32:44.025535  541833 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0730 01:32:44.025610  541833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:32:44.059736  541833 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0730 01:32:44.059804  541833 ssh_runner.go:195] Run: which lz4
	I0730 01:32:44.063632  541833 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0730 01:32:44.067843  541833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0730 01:32:44.067872  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0730 01:32:45.574457  541833 crio.go:462] duration metric: took 1.510852136s to copy over tarball
	I0730 01:32:45.574545  541833 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0730 01:32:48.351682  541833 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.777095402s)
	I0730 01:32:48.351723  541833 crio.go:469] duration metric: took 2.777229248s to extract the tarball
	I0730 01:32:48.351734  541833 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0730 01:32:48.393950  541833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:32:48.467881  541833 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0730 01:32:48.467923  541833 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0730 01:32:48.468020  541833 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:32:48.468044  541833 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:32:48.468077  541833 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:32:48.468108  541833 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:32:48.468138  541833 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0730 01:32:48.468110  541833 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:32:48.468079  541833 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0730 01:32:48.468051  541833 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0730 01:32:48.469766  541833 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0730 01:32:48.470052  541833 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:32:48.470055  541833 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:32:48.470076  541833 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0730 01:32:48.470188  541833 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:32:48.470343  541833 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:32:48.470465  541833 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:32:48.470519  541833 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0730 01:32:48.694440  541833 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0730 01:32:48.716625  541833 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0730 01:32:48.722096  541833 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:32:48.723303  541833 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:32:48.735855  541833 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0730 01:32:48.735972  541833 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:32:48.739119  541833 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0730 01:32:48.739188  541833 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0730 01:32:48.739232  541833 ssh_runner.go:195] Run: which crictl
	I0730 01:32:48.750651  541833 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:32:48.865507  541833 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0730 01:32:48.865567  541833 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0730 01:32:48.865620  541833 ssh_runner.go:195] Run: which crictl
	I0730 01:32:48.888305  541833 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0730 01:32:48.888352  541833 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:32:48.888407  541833 ssh_runner.go:195] Run: which crictl
	I0730 01:32:48.900340  541833 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0730 01:32:48.900384  541833 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:32:48.900443  541833 ssh_runner.go:195] Run: which crictl
	I0730 01:32:48.900474  541833 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0730 01:32:48.900574  541833 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0730 01:32:48.900622  541833 ssh_runner.go:195] Run: which crictl
	I0730 01:32:48.913824  541833 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0730 01:32:48.913875  541833 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:32:48.913918  541833 ssh_runner.go:195] Run: which crictl
	I0730 01:32:48.913921  541833 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0730 01:32:48.914059  541833 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0730 01:32:48.914118  541833 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0730 01:32:48.914147  541833 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:32:48.914172  541833 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:32:48.914221  541833 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0730 01:32:48.914253  541833 ssh_runner.go:195] Run: which crictl
	I0730 01:32:48.914194  541833 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:32:49.039567  541833 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0730 01:32:49.049295  541833 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0730 01:32:49.049335  541833 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0730 01:32:49.049391  541833 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:32:49.049458  541833 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:32:49.049480  541833 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0730 01:32:49.049517  541833 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0730 01:32:49.095424  541833 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0730 01:32:49.095433  541833 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0730 01:32:49.393151  541833 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:32:49.534131  541833 cache_images.go:92] duration metric: took 1.066180002s to LoadCachedImages
	W0730 01:32:49.534255  541833 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0730 01:32:49.534274  541833 kubeadm.go:934] updating node { 192.168.50.97 8443 v1.20.0 crio true true} ...
	I0730 01:32:49.534415  541833 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-599146 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-599146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 01:32:49.534512  541833 ssh_runner.go:195] Run: crio config
	I0730 01:32:49.581741  541833 cni.go:84] Creating CNI manager for ""
	I0730 01:32:49.581766  541833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 01:32:49.581778  541833 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 01:32:49.581825  541833 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.97 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-599146 NodeName:kubernetes-upgrade-599146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0730 01:32:49.581983  541833 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-599146"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 01:32:49.582066  541833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0730 01:32:49.592004  541833 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 01:32:49.592084  541833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 01:32:49.601502  541833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0730 01:32:49.617518  541833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 01:32:49.634292  541833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0730 01:32:49.650145  541833 ssh_runner.go:195] Run: grep 192.168.50.97	control-plane.minikube.internal$ /etc/hosts
	I0730 01:32:49.653576  541833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 01:32:49.664953  541833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:32:49.780477  541833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 01:32:49.799871  541833 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146 for IP: 192.168.50.97
	I0730 01:32:49.799897  541833 certs.go:194] generating shared ca certs ...
	I0730 01:32:49.799918  541833 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:32:49.800116  541833 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 01:32:49.800187  541833 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 01:32:49.800200  541833 certs.go:256] generating profile certs ...
	I0730 01:32:49.800269  541833 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/client.key
	I0730 01:32:49.800283  541833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/client.crt with IP's: []
	I0730 01:32:49.919267  541833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/client.crt ...
	I0730 01:32:49.919298  541833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/client.crt: {Name:mk2fd8be1edcc1409720daa94fb510cd67e18226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:32:49.919502  541833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/client.key ...
	I0730 01:32:49.919520  541833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/client.key: {Name:mkf12b166f088c334d412419a63bacd12ff49115 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:32:49.919628  541833 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.key.04b001d3
	I0730 01:32:49.919653  541833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.crt.04b001d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.97]
	I0730 01:32:50.062508  541833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.crt.04b001d3 ...
	I0730 01:32:50.062539  541833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.crt.04b001d3: {Name:mkb73115682d7fc46e43d0b76349ba27e955ae21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:32:50.062728  541833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.key.04b001d3 ...
	I0730 01:32:50.062747  541833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.key.04b001d3: {Name:mkf0f9e8fff4098515adb8115ca5dc5d4cb27c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:32:50.062856  541833 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.crt.04b001d3 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.crt
	I0730 01:32:50.062953  541833 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.key.04b001d3 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.key
	I0730 01:32:50.063029  541833 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.key
	I0730 01:32:50.063058  541833 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.crt with IP's: []
	I0730 01:32:50.119222  541833 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.crt ...
	I0730 01:32:50.119255  541833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.crt: {Name:mk8647895c459ef9eaa75b5a98c272e452e4d9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:32:50.119430  541833 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.key ...
	I0730 01:32:50.119450  541833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.key: {Name:mkd45d83067745c7ad5791fb380eb5f1cde0ab71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:32:50.119659  541833 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 01:32:50.119707  541833 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 01:32:50.119722  541833 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 01:32:50.119758  541833 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 01:32:50.119791  541833 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 01:32:50.119823  541833 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 01:32:50.119882  541833 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:32:50.120512  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 01:32:50.150765  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 01:32:50.176802  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 01:32:50.203587  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 01:32:50.232336  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0730 01:32:50.266379  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 01:32:50.295406  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 01:32:50.318869  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0730 01:32:50.341406  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 01:32:50.370025  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 01:32:50.395298  541833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 01:32:50.420243  541833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 01:32:50.438160  541833 ssh_runner.go:195] Run: openssl version
	I0730 01:32:50.444265  541833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 01:32:50.456010  541833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 01:32:50.460588  541833 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 01:32:50.460659  541833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 01:32:50.466492  541833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 01:32:50.478916  541833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 01:32:50.491370  541833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 01:32:50.495925  541833 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 01:32:50.495985  541833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 01:32:50.501373  541833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 01:32:50.513189  541833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 01:32:50.524910  541833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:32:50.529176  541833 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:32:50.529227  541833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:32:50.534670  541833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 01:32:50.546360  541833 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 01:32:50.550480  541833 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 01:32:50.550531  541833 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-599146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-599146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:32:50.550596  541833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 01:32:50.550666  541833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 01:32:50.592800  541833 cri.go:89] found id: ""
	I0730 01:32:50.592897  541833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 01:32:50.602873  541833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0730 01:32:50.613323  541833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 01:32:50.623650  541833 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 01:32:50.623670  541833 kubeadm.go:157] found existing configuration files:
	
	I0730 01:32:50.623721  541833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 01:32:50.632563  541833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 01:32:50.632641  541833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 01:32:50.642112  541833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 01:32:50.651116  541833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 01:32:50.651193  541833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 01:32:50.661227  541833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 01:32:50.672934  541833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 01:32:50.673010  541833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 01:32:50.685550  541833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 01:32:50.695037  541833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 01:32:50.695126  541833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0730 01:32:50.708202  541833 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0730 01:32:51.013932  541833 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0730 01:34:49.267169  541833 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0730 01:34:49.267254  541833 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0730 01:34:49.268665  541833 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0730 01:34:49.268744  541833 kubeadm.go:310] [preflight] Running pre-flight checks
	I0730 01:34:49.268829  541833 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0730 01:34:49.268970  541833 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0730 01:34:49.269133  541833 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0730 01:34:49.269252  541833 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0730 01:34:49.270943  541833 out.go:204]   - Generating certificates and keys ...
	I0730 01:34:49.271024  541833 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0730 01:34:49.271092  541833 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0730 01:34:49.271172  541833 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0730 01:34:49.271248  541833 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0730 01:34:49.271345  541833 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0730 01:34:49.271395  541833 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0730 01:34:49.271440  541833 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0730 01:34:49.271559  541833 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-599146 localhost] and IPs [192.168.50.97 127.0.0.1 ::1]
	I0730 01:34:49.271604  541833 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0730 01:34:49.271748  541833 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-599146 localhost] and IPs [192.168.50.97 127.0.0.1 ::1]
	I0730 01:34:49.271841  541833 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0730 01:34:49.271924  541833 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0730 01:34:49.271986  541833 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0730 01:34:49.272141  541833 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0730 01:34:49.272219  541833 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0730 01:34:49.272290  541833 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0730 01:34:49.272388  541833 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0730 01:34:49.272436  541833 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0730 01:34:49.272573  541833 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0730 01:34:49.272690  541833 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0730 01:34:49.272771  541833 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0730 01:34:49.272851  541833 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0730 01:34:49.274955  541833 out.go:204]   - Booting up control plane ...
	I0730 01:34:49.275045  541833 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0730 01:34:49.275111  541833 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0730 01:34:49.275169  541833 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0730 01:34:49.275249  541833 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0730 01:34:49.275378  541833 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0730 01:34:49.275452  541833 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0730 01:34:49.275542  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:34:49.275732  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:34:49.275816  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:34:49.275974  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:34:49.276057  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:34:49.276248  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:34:49.276331  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:34:49.276483  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:34:49.276553  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:34:49.276743  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:34:49.276752  541833 kubeadm.go:310] 
	I0730 01:34:49.276824  541833 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0730 01:34:49.276886  541833 kubeadm.go:310] 		timed out waiting for the condition
	I0730 01:34:49.276894  541833 kubeadm.go:310] 
	I0730 01:34:49.276934  541833 kubeadm.go:310] 	This error is likely caused by:
	I0730 01:34:49.276972  541833 kubeadm.go:310] 		- The kubelet is not running
	I0730 01:34:49.277075  541833 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0730 01:34:49.277083  541833 kubeadm.go:310] 
	I0730 01:34:49.277182  541833 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0730 01:34:49.277224  541833 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0730 01:34:49.277260  541833 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0730 01:34:49.277269  541833 kubeadm.go:310] 
	I0730 01:34:49.277395  541833 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0730 01:34:49.277465  541833 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0730 01:34:49.277472  541833 kubeadm.go:310] 
	I0730 01:34:49.277559  541833 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0730 01:34:49.277640  541833 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0730 01:34:49.277708  541833 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0730 01:34:49.277771  541833 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0730 01:34:49.277826  541833 kubeadm.go:310] 
	W0730 01:34:49.277900  541833 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-599146 localhost] and IPs [192.168.50.97 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-599146 localhost] and IPs [192.168.50.97 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-599146 localhost] and IPs [192.168.50.97 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-599146 localhost] and IPs [192.168.50.97 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0730 01:34:49.277945  541833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0730 01:34:49.726798  541833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 01:34:49.741681  541833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 01:34:49.751433  541833 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 01:34:49.751458  541833 kubeadm.go:157] found existing configuration files:
	
	I0730 01:34:49.751526  541833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 01:34:49.761401  541833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 01:34:49.761460  541833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 01:34:49.770342  541833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 01:34:49.778843  541833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 01:34:49.778901  541833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 01:34:49.788053  541833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 01:34:49.796554  541833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 01:34:49.796626  541833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 01:34:49.805403  541833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 01:34:49.813571  541833 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 01:34:49.813628  541833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0730 01:34:49.822097  541833 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0730 01:34:50.026903  541833 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0730 01:36:45.838039  541833 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0730 01:36:45.838161  541833 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0730 01:36:45.839416  541833 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0730 01:36:45.839507  541833 kubeadm.go:310] [preflight] Running pre-flight checks
	I0730 01:36:45.839638  541833 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0730 01:36:45.839784  541833 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0730 01:36:45.839925  541833 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0730 01:36:45.840034  541833 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0730 01:36:45.912253  541833 out.go:204]   - Generating certificates and keys ...
	I0730 01:36:45.912400  541833 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0730 01:36:45.912507  541833 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0730 01:36:45.912618  541833 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0730 01:36:45.912760  541833 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0730 01:36:45.912871  541833 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0730 01:36:45.912968  541833 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0730 01:36:45.913074  541833 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0730 01:36:45.913160  541833 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0730 01:36:45.913293  541833 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0730 01:36:45.913400  541833 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0730 01:36:45.913451  541833 kubeadm.go:310] [certs] Using the existing "sa" key
	I0730 01:36:45.913524  541833 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0730 01:36:45.913590  541833 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0730 01:36:45.913663  541833 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0730 01:36:45.913746  541833 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0730 01:36:45.913851  541833 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0730 01:36:45.914010  541833 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0730 01:36:45.914136  541833 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0730 01:36:45.914189  541833 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0730 01:36:45.914265  541833 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0730 01:36:45.949762  541833 out.go:204]   - Booting up control plane ...
	I0730 01:36:45.949943  541833 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0730 01:36:45.950066  541833 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0730 01:36:45.950159  541833 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0730 01:36:45.950281  541833 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0730 01:36:45.950496  541833 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0730 01:36:45.950567  541833 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0730 01:36:45.950666  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:36:45.950929  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:36:45.951058  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:36:45.951306  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:36:45.951429  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:36:45.951699  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:36:45.951800  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:36:45.952044  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:36:45.952149  541833 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0730 01:36:45.952393  541833 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0730 01:36:45.952406  541833 kubeadm.go:310] 
	I0730 01:36:45.952464  541833 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0730 01:36:45.952524  541833 kubeadm.go:310] 		timed out waiting for the condition
	I0730 01:36:45.952535  541833 kubeadm.go:310] 
	I0730 01:36:45.952577  541833 kubeadm.go:310] 	This error is likely caused by:
	I0730 01:36:45.952632  541833 kubeadm.go:310] 		- The kubelet is not running
	I0730 01:36:45.952811  541833 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0730 01:36:45.952826  541833 kubeadm.go:310] 
	I0730 01:36:45.952991  541833 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0730 01:36:45.953055  541833 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0730 01:36:45.953111  541833 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0730 01:36:45.953126  541833 kubeadm.go:310] 
	I0730 01:36:45.953267  541833 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0730 01:36:45.953397  541833 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0730 01:36:45.953415  541833 kubeadm.go:310] 
	I0730 01:36:45.953584  541833 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0730 01:36:45.953714  541833 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0730 01:36:45.953825  541833 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0730 01:36:45.953920  541833 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0730 01:36:45.953943  541833 kubeadm.go:310] 
	I0730 01:36:45.954012  541833 kubeadm.go:394] duration metric: took 3m55.403483907s to StartCluster
	I0730 01:36:45.954077  541833 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 01:36:45.954145  541833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 01:36:46.000647  541833 cri.go:89] found id: ""
	I0730 01:36:46.000686  541833 logs.go:276] 0 containers: []
	W0730 01:36:46.000699  541833 logs.go:278] No container was found matching "kube-apiserver"
	I0730 01:36:46.000723  541833 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 01:36:46.000803  541833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 01:36:46.036498  541833 cri.go:89] found id: ""
	I0730 01:36:46.036537  541833 logs.go:276] 0 containers: []
	W0730 01:36:46.036549  541833 logs.go:278] No container was found matching "etcd"
	I0730 01:36:46.036557  541833 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 01:36:46.036630  541833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 01:36:46.074777  541833 cri.go:89] found id: ""
	I0730 01:36:46.074823  541833 logs.go:276] 0 containers: []
	W0730 01:36:46.074835  541833 logs.go:278] No container was found matching "coredns"
	I0730 01:36:46.074843  541833 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 01:36:46.074912  541833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 01:36:46.109364  541833 cri.go:89] found id: ""
	I0730 01:36:46.109397  541833 logs.go:276] 0 containers: []
	W0730 01:36:46.109409  541833 logs.go:278] No container was found matching "kube-scheduler"
	I0730 01:36:46.109417  541833 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 01:36:46.109488  541833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 01:36:46.145088  541833 cri.go:89] found id: ""
	I0730 01:36:46.145119  541833 logs.go:276] 0 containers: []
	W0730 01:36:46.145131  541833 logs.go:278] No container was found matching "kube-proxy"
	I0730 01:36:46.145138  541833 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 01:36:46.145214  541833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 01:36:46.189153  541833 cri.go:89] found id: ""
	I0730 01:36:46.189189  541833 logs.go:276] 0 containers: []
	W0730 01:36:46.189201  541833 logs.go:278] No container was found matching "kube-controller-manager"
	I0730 01:36:46.189210  541833 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 01:36:46.189281  541833 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 01:36:46.226055  541833 cri.go:89] found id: ""
	I0730 01:36:46.226094  541833 logs.go:276] 0 containers: []
	W0730 01:36:46.226106  541833 logs.go:278] No container was found matching "kindnet"
	I0730 01:36:46.226121  541833 logs.go:123] Gathering logs for kubelet ...
	I0730 01:36:46.226137  541833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 01:36:46.295713  541833 logs.go:123] Gathering logs for dmesg ...
	I0730 01:36:46.295766  541833 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 01:36:46.312456  541833 logs.go:123] Gathering logs for describe nodes ...
	I0730 01:36:46.312489  541833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0730 01:36:46.431362  541833 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0730 01:36:46.431403  541833 logs.go:123] Gathering logs for CRI-O ...
	I0730 01:36:46.431423  541833 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 01:36:46.535055  541833 logs.go:123] Gathering logs for container status ...
	I0730 01:36:46.535105  541833 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0730 01:36:46.585637  541833 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0730 01:36:46.585696  541833 out.go:239] * 
	W0730 01:36:46.585834  541833 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0730 01:36:46.585871  541833 out.go:239] * 
	W0730 01:36:46.587145  541833 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0730 01:36:46.590952  541833 out.go:177] 
	W0730 01:36:46.592215  541833 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0730 01:36:46.592283  541833 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0730 01:36:46.592310  541833 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0730 01:36:46.593643  541833 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
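The start above fails with exit status 109 because the kubelet on the v1.20.0 node never answers its health probe on 127.0.0.1:10248, so kubeadm's wait-control-plane phase times out and minikube reports K8S_KUBELET_NOT_RUNNING. If the same failure needed manual triage, one possible sequence, assembled only from the commands and the --extra-config suggestion already printed in the log above (a sketch, not something exercised by this test run), would be:

	# retry the v1.20.0 start with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still refuses 127.0.0.1:10248, inspect it on the node
	out/minikube-linux-amd64 -p kubernetes-upgrade-599146 ssh "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 100"
	# list control-plane containers through CRI-O, as recommended by kubeadm
	out/minikube-linux-amd64 -p kubernetes-upgrade-599146 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"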
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-599146
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-599146: (6.325287002s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-599146 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-599146 status --format={{.Host}}: exit status 7 (76.188482ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.457632278s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-599146 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.572888ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-599146] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19346
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-599146
	    minikube start -p kubernetes-upgrade-599146 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5991462 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-599146 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
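The downgrade attempt behaves as the test expects: minikube refuses to move the existing v1.31.0-beta.0 cluster back to v1.20.0 and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) without touching the cluster. A short sketch of the two follow-ups implied above, using only commands that already appear in this report (not exercised beyond what the test itself runs):

	# confirm the existing cluster still reports a v1.31.0-beta.0 server version
	kubectl --context kubernetes-upgrade-599146 version --output=json
	# only if a v1.20.0 cluster were actually required: recreate rather than downgrade
	minikube delete -p kubernetes-upgrade-599146
	minikube start -p kubernetes-upgrade-599146 --kubernetes-version=v1.20.0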
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-599146 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m16.392847653s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-30 01:39:16.0695412 +0000 UTC m=+5710.839143470
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-599146 -n kubernetes-upgrade-599146
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-599146 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-599146 logs -n 25: (2.071852874s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-428243 sudo cat             | cilium-428243             | jenkins | v1.33.1 | 30 Jul 24 01:35 UTC |                     |
	|         | /etc/containerd/config.toml           |                           |         |         |                     |                     |
	| ssh     | -p cilium-428243 sudo                 | cilium-428243             | jenkins | v1.33.1 | 30 Jul 24 01:35 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-428243 sudo                 | cilium-428243             | jenkins | v1.33.1 | 30 Jul 24 01:35 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-428243 sudo                 | cilium-428243             | jenkins | v1.33.1 | 30 Jul 24 01:35 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-428243 sudo find            | cilium-428243             | jenkins | v1.33.1 | 30 Jul 24 01:35 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-428243 sudo crio            | cilium-428243             | jenkins | v1.33.1 | 30 Jul 24 01:35 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-428243                      | cilium-428243             | jenkins | v1.33.1 | 30 Jul 24 01:35 UTC | 30 Jul 24 01:35 UTC |
	| start   | -p cert-expiration-050894             | cert-expiration-050894    | jenkins | v1.33.1 | 30 Jul 24 01:35 UTC | 30 Jul 24 01:36 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-596705 stop           | minikube                  | jenkins | v1.26.0 | 30 Jul 24 01:36 UTC | 30 Jul 24 01:36 UTC |
	| start   | -p stopped-upgrade-596705             | stopped-upgrade-596705    | jenkins | v1.33.1 | 30 Jul 24 01:36 UTC | 30 Jul 24 01:37 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-191803           | force-systemd-env-191803  | jenkins | v1.33.1 | 30 Jul 24 01:36 UTC | 30 Jul 24 01:36 UTC |
	| start   | -p force-systemd-flag-452226          | force-systemd-flag-452226 | jenkins | v1.33.1 | 30 Jul 24 01:36 UTC | 30 Jul 24 01:37 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-599146          | kubernetes-upgrade-599146 | jenkins | v1.33.1 | 30 Jul 24 01:36 UTC | 30 Jul 24 01:36 UTC |
	| start   | -p kubernetes-upgrade-599146          | kubernetes-upgrade-599146 | jenkins | v1.33.1 | 30 Jul 24 01:36 UTC | 30 Jul 24 01:37 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-596705             | stopped-upgrade-596705    | jenkins | v1.33.1 | 30 Jul 24 01:37 UTC | 30 Jul 24 01:37 UTC |
	| start   | -p cert-options-398469                | cert-options-398469       | jenkins | v1.33.1 | 30 Jul 24 01:37 UTC | 30 Jul 24 01:38 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-452226 ssh cat     | force-systemd-flag-452226 | jenkins | v1.33.1 | 30 Jul 24 01:37 UTC | 30 Jul 24 01:37 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-452226          | force-systemd-flag-452226 | jenkins | v1.33.1 | 30 Jul 24 01:37 UTC | 30 Jul 24 01:37 UTC |
	| start   | -p old-k8s-version-978883             | old-k8s-version-978883    | jenkins | v1.33.1 | 30 Jul 24 01:37 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-599146          | kubernetes-upgrade-599146 | jenkins | v1.33.1 | 30 Jul 24 01:37 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-599146          | kubernetes-upgrade-599146 | jenkins | v1.33.1 | 30 Jul 24 01:37 UTC | 30 Jul 24 01:39 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-398469 ssh               | cert-options-398469       | jenkins | v1.33.1 | 30 Jul 24 01:38 UTC | 30 Jul 24 01:38 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-398469 -- sudo        | cert-options-398469       | jenkins | v1.33.1 | 30 Jul 24 01:38 UTC | 30 Jul 24 01:38 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-398469                | cert-options-398469       | jenkins | v1.33.1 | 30 Jul 24 01:38 UTC | 30 Jul 24 01:38 UTC |
	| start   | -p no-preload-123365 --memory=2200    | no-preload-123365         | jenkins | v1.33.1 | 30 Jul 24 01:38 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 01:38:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 01:38:27.953860  550170 out.go:291] Setting OutFile to fd 1 ...
	I0730 01:38:27.954006  550170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:38:27.954017  550170 out.go:304] Setting ErrFile to fd 2...
	I0730 01:38:27.954023  550170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:38:27.954214  550170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 01:38:27.954859  550170 out.go:298] Setting JSON to false
	I0730 01:38:27.955931  550170 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12050,"bootTime":1722291458,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 01:38:27.955997  550170 start.go:139] virtualization: kvm guest
	I0730 01:38:27.958517  550170 out.go:177] * [no-preload-123365] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 01:38:27.960149  550170 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 01:38:27.960288  550170 notify.go:220] Checking for updates...
	I0730 01:38:27.962746  550170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 01:38:27.964128  550170 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 01:38:27.965641  550170 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 01:38:27.966855  550170 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 01:38:27.968075  550170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 01:38:27.969766  550170 config.go:182] Loaded profile config "cert-expiration-050894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 01:38:27.969874  550170 config.go:182] Loaded profile config "kubernetes-upgrade-599146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0730 01:38:27.969961  550170 config.go:182] Loaded profile config "old-k8s-version-978883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0730 01:38:27.970061  550170 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 01:38:28.009017  550170 out.go:177] * Using the kvm2 driver based on user configuration
	I0730 01:38:28.010545  550170 start.go:297] selected driver: kvm2
	I0730 01:38:28.010563  550170 start.go:901] validating driver "kvm2" against <nil>
	I0730 01:38:28.010579  550170 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 01:38:28.011441  550170 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.011550  550170 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 01:38:28.028296  550170 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 01:38:28.028372  550170 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 01:38:28.029172  550170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 01:38:28.029225  550170 cni.go:84] Creating CNI manager for ""
	I0730 01:38:28.029232  550170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 01:38:28.029239  550170 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0730 01:38:28.029304  550170 start.go:340] cluster config:
	{Name:no-preload-123365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-123365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:38:28.029401  550170 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.031234  550170 out.go:177] * Starting "no-preload-123365" primary control-plane node in "no-preload-123365" cluster
	I0730 01:38:28.032509  550170 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0730 01:38:28.032634  550170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/no-preload-123365/config.json ...
	I0730 01:38:28.032663  550170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/no-preload-123365/config.json: {Name:mkfb2eb1d777fc04fbdea2d31f046d51c1f48d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:38:28.032773  550170 cache.go:107] acquiring lock: {Name:mk27ecb264b6b56fb1b7f28b435ee4b5f52a5b08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.032834  550170 cache.go:107] acquiring lock: {Name:mk7d25d02963f77b5cdddf8408abb489a01a5fd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.032831  550170 cache.go:107] acquiring lock: {Name:mk055774fe4b7357eeb640cbc74f4a27f7a389ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.032923  550170 start.go:360] acquireMachinesLock for no-preload-123365: {Name:mk96fc86c0ad2e3d5d383f770446c5d8531973ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0730 01:38:28.032773  550170 cache.go:107] acquiring lock: {Name:mkc3e475f0d349304056e53a323c6d91e82f83da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.032802  550170 cache.go:107] acquiring lock: {Name:mkfb2fd5423e3578496f93f1b6a8d0b3dcb3cbd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.032950  550170 cache.go:107] acquiring lock: {Name:mk1da0d74e0002021a58834eff6f059f2397e6af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.032964  550170 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0730 01:38:28.032994  550170 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0730 01:38:28.033003  550170 cache.go:115] /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0730 01:38:28.033015  550170 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 255.685µs
	I0730 01:38:28.033027  550170 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0730 01:38:28.033026  550170 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0730 01:38:28.032810  550170 cache.go:107] acquiring lock: {Name:mk6da395b20e517652784615f709f65c49ac5ed5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.033084  550170 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0730 01:38:28.033099  550170 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0730 01:38:28.033126  550170 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0730 01:38:28.032854  550170 cache.go:107] acquiring lock: {Name:mk34325e5b117df4e3f7a44786aaa14483f4c1f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 01:38:28.033310  550170 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0730 01:38:28.034432  550170 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0730 01:38:28.034622  550170 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0730 01:38:28.034637  550170 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0730 01:38:28.034651  550170 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0730 01:38:28.034432  550170 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0730 01:38:28.035010  550170 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0730 01:38:28.035075  550170 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0730 01:38:28.585935  550170 cache.go:162] opening:  /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0730 01:38:28.599663  550170 cache.go:162] opening:  /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0730 01:38:28.605444  550170 cache.go:162] opening:  /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0730 01:38:28.610512  550170 cache.go:162] opening:  /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0730 01:38:28.613165  550170 cache.go:162] opening:  /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0730 01:38:28.614688  550170 cache.go:162] opening:  /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0730 01:38:28.620114  550170 cache.go:162] opening:  /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0730 01:38:28.712943  550170 cache.go:157] /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0730 01:38:28.712973  550170 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 680.166262ms
	I0730 01:38:28.712988  550170 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0730 01:38:29.232829  550170 cache.go:157] /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0730 01:38:29.232855  550170 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 1.200001643s
	I0730 01:38:29.232871  550170 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0730 01:38:30.197044  550170 cache.go:157] /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0730 01:38:30.197071  550170 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.164240238s
	I0730 01:38:30.197084  550170 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0730 01:38:30.501443  550170 cache.go:157] /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0730 01:38:30.501467  550170 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 2.468685515s
	I0730 01:38:30.501479  550170 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0730 01:38:30.596687  550170 cache.go:157] /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0730 01:38:30.596735  550170 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 2.56398509s
	I0730 01:38:30.596753  550170 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0730 01:38:30.606413  550170 cache.go:157] /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0730 01:38:30.606450  550170 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 2.573651271s
	I0730 01:38:30.606466  550170 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0730 01:38:31.106996  550170 cache.go:157] /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 exists
	I0730 01:38:31.107026  550170 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0" took 3.074101261s
	I0730 01:38:31.107045  550170 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0730 01:38:31.107066  550170 cache.go:87] Successfully saved all images to host disk.
	I0730 01:38:33.501470  549668 start.go:364] duration metric: took 33.660483449s to acquireMachinesLock for "kubernetes-upgrade-599146"
	I0730 01:38:33.501528  549668 start.go:96] Skipping create...Using existing machine configuration
	I0730 01:38:33.501539  549668 fix.go:54] fixHost starting: 
	I0730 01:38:33.501933  549668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:38:33.501971  549668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:38:33.521938  549668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0730 01:38:33.522418  549668 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:38:33.522909  549668 main.go:141] libmachine: Using API Version  1
	I0730 01:38:33.522943  549668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:38:33.523334  549668 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:38:33.523518  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:38:33.523673  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetState
	I0730 01:38:33.525388  549668 fix.go:112] recreateIfNeeded on kubernetes-upgrade-599146: state=Running err=<nil>
	W0730 01:38:33.525422  549668 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 01:38:33.527390  549668 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-599146" VM ...
	I0730 01:38:33.528660  549668 machine.go:94] provisionDockerMachine start ...
	I0730 01:38:33.528685  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:38:33.528934  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:33.531755  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.532167  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:33.532196  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.532304  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:38:33.532498  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:33.532633  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:33.532776  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:38:33.532924  549668 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:33.533114  549668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:38:33.533126  549668 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 01:38:33.646715  549668 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-599146
	
	I0730 01:38:33.646749  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetMachineName
	I0730 01:38:33.647068  549668 buildroot.go:166] provisioning hostname "kubernetes-upgrade-599146"
	I0730 01:38:33.647104  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetMachineName
	I0730 01:38:33.647315  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:33.650312  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.650736  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:33.650764  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.650916  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:38:33.651145  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:33.651362  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:33.651550  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:38:33.651760  549668 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:33.651978  549668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:38:33.651995  549668 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-599146 && echo "kubernetes-upgrade-599146" | sudo tee /etc/hostname
	I0730 01:38:33.779181  549668 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-599146
	
	I0730 01:38:33.779216  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:33.781957  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.782323  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:33.782351  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.782559  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:38:33.782760  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:33.782916  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:33.783026  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:38:33.783203  549668 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:33.783434  549668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:38:33.783454  549668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-599146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-599146/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-599146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 01:38:33.901308  549668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 01:38:33.901346  549668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 01:38:33.901368  549668 buildroot.go:174] setting up certificates
	I0730 01:38:33.901377  549668 provision.go:84] configureAuth start
	I0730 01:38:33.901388  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetMachineName
	I0730 01:38:33.901707  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetIP
	I0730 01:38:33.904455  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.904832  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:33.904871  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.905032  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:33.907429  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.907838  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:33.907864  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:33.907970  549668 provision.go:143] copyHostCerts
	I0730 01:38:33.908044  549668 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 01:38:33.908061  549668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 01:38:33.908145  549668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 01:38:33.908268  549668 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 01:38:33.908279  549668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 01:38:33.908304  549668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 01:38:33.908367  549668 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 01:38:33.908373  549668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 01:38:33.908391  549668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 01:38:33.908442  549668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-599146 san=[127.0.0.1 192.168.50.97 kubernetes-upgrade-599146 localhost minikube]
	I0730 01:38:34.358356  549668 provision.go:177] copyRemoteCerts
	I0730 01:38:34.358423  549668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 01:38:34.358448  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:34.361387  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:34.361803  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:34.361841  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:34.362026  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:38:34.362319  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:34.362508  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:38:34.362721  549668 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa Username:docker}
	I0730 01:38:34.447387  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 01:38:34.474510  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0730 01:38:34.502455  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 01:38:34.530218  549668 provision.go:87] duration metric: took 628.827426ms to configureAuth
	I0730 01:38:34.530256  549668 buildroot.go:189] setting minikube options for container-runtime
	I0730 01:38:34.530404  549668 config.go:182] Loaded profile config "kubernetes-upgrade-599146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0730 01:38:34.530482  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:34.533645  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:34.534009  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:34.534037  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:34.534288  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:38:34.534507  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:34.534679  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:34.534842  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:38:34.535049  549668 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:34.535282  549668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:38:34.535313  549668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 01:38:32.053938  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.054444  549392 main.go:141] libmachine: (old-k8s-version-978883) Found IP for machine: 192.168.61.3
	I0730 01:38:32.054470  549392 main.go:141] libmachine: (old-k8s-version-978883) Reserving static IP address...
	I0730 01:38:32.054484  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has current primary IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.054854  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-978883", mac: "52:54:00:76:ab:5c", ip: "192.168.61.3"} in network mk-old-k8s-version-978883
	I0730 01:38:32.132445  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | Getting to WaitForSSH function...
	I0730 01:38:32.132480  549392 main.go:141] libmachine: (old-k8s-version-978883) Reserved static IP address: 192.168.61.3
	I0730 01:38:32.132529  549392 main.go:141] libmachine: (old-k8s-version-978883) Waiting for SSH to be available...
	I0730 01:38:32.135160  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.135548  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:32.135577  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.135687  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | Using SSH client type: external
	I0730 01:38:32.135715  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | Using SSH private key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/old-k8s-version-978883/id_rsa (-rw-------)
	I0730 01:38:32.135750  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19346-495103/.minikube/machines/old-k8s-version-978883/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0730 01:38:32.135765  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | About to run SSH command:
	I0730 01:38:32.135778  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | exit 0
	I0730 01:38:32.260919  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | SSH cmd err, output: <nil>: 
	I0730 01:38:32.261096  549392 main.go:141] libmachine: (old-k8s-version-978883) KVM machine creation complete!
	I0730 01:38:32.261629  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetConfigRaw
	I0730 01:38:32.262342  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .DriverName
	I0730 01:38:32.262607  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .DriverName
	I0730 01:38:32.262815  549392 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0730 01:38:32.262832  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetState
	I0730 01:38:32.264152  549392 main.go:141] libmachine: Detecting operating system of created instance...
	I0730 01:38:32.264168  549392 main.go:141] libmachine: Waiting for SSH to be available...
	I0730 01:38:32.264176  549392 main.go:141] libmachine: Getting to WaitForSSH function...
	I0730 01:38:32.264185  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:32.267302  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.267751  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:32.267781  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.267966  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:32.268178  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.268374  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.268542  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:32.268760  549392 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:32.269012  549392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0730 01:38:32.269030  549392 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0730 01:38:32.375878  549392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 01:38:32.375912  549392 main.go:141] libmachine: Detecting the provisioner...
	I0730 01:38:32.375924  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:32.378880  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.379389  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:32.379420  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.379638  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:32.379849  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.380062  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.380246  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:32.380439  549392 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:32.380668  549392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0730 01:38:32.380683  549392 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0730 01:38:32.489292  549392 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0730 01:38:32.489383  549392 main.go:141] libmachine: found compatible host: buildroot
	I0730 01:38:32.489397  549392 main.go:141] libmachine: Provisioning with buildroot...
	I0730 01:38:32.489412  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetMachineName
	I0730 01:38:32.489684  549392 buildroot.go:166] provisioning hostname "old-k8s-version-978883"
	I0730 01:38:32.489711  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetMachineName
	I0730 01:38:32.489909  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:32.492559  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.492967  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:32.492996  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.493189  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:32.493321  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.493435  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.493612  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:32.493825  549392 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:32.494019  549392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0730 01:38:32.494034  549392 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-978883 && echo "old-k8s-version-978883" | sudo tee /etc/hostname
	I0730 01:38:32.616263  549392 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-978883
	
	I0730 01:38:32.616298  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:32.618997  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.619394  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:32.619420  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.619553  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:32.619783  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.619958  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.620128  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:32.620301  549392 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:32.620496  549392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0730 01:38:32.620519  549392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-978883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-978883/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-978883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 01:38:32.737374  549392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 01:38:32.737410  549392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19346-495103/.minikube CaCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19346-495103/.minikube}
	I0730 01:38:32.737467  549392 buildroot.go:174] setting up certificates
	I0730 01:38:32.737479  549392 provision.go:84] configureAuth start
	I0730 01:38:32.737492  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetMachineName
	I0730 01:38:32.737772  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetIP
	I0730 01:38:32.740552  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.740971  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:32.741001  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.741197  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:32.743484  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.743850  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:32.743872  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.744000  549392 provision.go:143] copyHostCerts
	I0730 01:38:32.744058  549392 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem, removing ...
	I0730 01:38:32.744069  549392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem
	I0730 01:38:32.744134  549392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/ca.pem (1082 bytes)
	I0730 01:38:32.744218  549392 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem, removing ...
	I0730 01:38:32.744226  549392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem
	I0730 01:38:32.744248  549392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/cert.pem (1123 bytes)
	I0730 01:38:32.744299  549392 exec_runner.go:144] found /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem, removing ...
	I0730 01:38:32.744305  549392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem
	I0730 01:38:32.744321  549392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19346-495103/.minikube/key.pem (1679 bytes)
	I0730 01:38:32.744369  549392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-978883 san=[127.0.0.1 192.168.61.3 localhost minikube old-k8s-version-978883]
	I0730 01:38:32.830607  549392 provision.go:177] copyRemoteCerts
	I0730 01:38:32.830668  549392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 01:38:32.830695  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:32.833551  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.833916  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:32.833935  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.834178  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:32.834403  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.834547  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:32.834663  549392 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/old-k8s-version-978883/id_rsa Username:docker}
	I0730 01:38:32.918148  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0730 01:38:32.939992  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0730 01:38:32.963693  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0730 01:38:32.987887  549392 provision.go:87] duration metric: took 250.391842ms to configureAuth
	I0730 01:38:32.987920  549392 buildroot.go:189] setting minikube options for container-runtime
	I0730 01:38:32.988082  549392 config.go:182] Loaded profile config "old-k8s-version-978883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0730 01:38:32.988163  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:32.991085  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.991473  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:32.991506  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:32.991730  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:32.991941  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.992138  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:32.992321  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:32.992493  549392 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:32.992669  549392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0730 01:38:32.992684  549392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 01:38:33.258880  549392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 01:38:33.258913  549392 main.go:141] libmachine: Checking connection to Docker...
	I0730 01:38:33.258922  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetURL
	I0730 01:38:33.260258  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | Using libvirt version 6000000
	I0730 01:38:33.262296  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.262717  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:33.262756  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.262921  549392 main.go:141] libmachine: Docker is up and running!
	I0730 01:38:33.262940  549392 main.go:141] libmachine: Reticulating splines...
	I0730 01:38:33.262950  549392 client.go:171] duration metric: took 24.754338086s to LocalClient.Create
	I0730 01:38:33.262975  549392 start.go:167] duration metric: took 24.754399828s to libmachine.API.Create "old-k8s-version-978883"
	I0730 01:38:33.263004  549392 start.go:293] postStartSetup for "old-k8s-version-978883" (driver="kvm2")
	I0730 01:38:33.263044  549392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 01:38:33.263072  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .DriverName
	I0730 01:38:33.263376  549392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 01:38:33.263404  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:33.266170  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.266540  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:33.266572  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.266723  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:33.266908  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:33.267110  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:33.267254  549392 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/old-k8s-version-978883/id_rsa Username:docker}
	I0730 01:38:33.350957  549392 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 01:38:33.354782  549392 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 01:38:33.354810  549392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 01:38:33.354886  549392 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 01:38:33.354995  549392 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 01:38:33.355110  549392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 01:38:33.363980  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:38:33.386157  549392 start.go:296] duration metric: took 123.11331ms for postStartSetup
	I0730 01:38:33.386229  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetConfigRaw
	I0730 01:38:33.386882  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetIP
	I0730 01:38:33.389696  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.390065  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:33.390096  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.390329  549392 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/config.json ...
	I0730 01:38:33.390520  549392 start.go:128] duration metric: took 24.904309805s to createHost
	I0730 01:38:33.390544  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:33.392819  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.393168  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:33.393201  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.393371  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:33.393601  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:33.393774  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:33.393909  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:33.394046  549392 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:33.394220  549392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0730 01:38:33.394232  549392 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0730 01:38:33.501332  549392 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722303513.476495317
	
	I0730 01:38:33.501362  549392 fix.go:216] guest clock: 1722303513.476495317
	I0730 01:38:33.501373  549392 fix.go:229] Guest: 2024-07-30 01:38:33.476495317 +0000 UTC Remote: 2024-07-30 01:38:33.390531517 +0000 UTC m=+51.427253556 (delta=85.9638ms)
	I0730 01:38:33.501401  549392 fix.go:200] guest clock delta is within tolerance: 85.9638ms
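
The two timestamps above are how the guest clock check works: minikube runs "date +%s.%N" inside the VM, compares the result to the host wall clock, and only resyncs when the difference exceeds a tolerance. A minimal Go sketch of that comparison using the values from this log; the 2-second tolerance is an assumption for illustration, not a value taken from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Unix(1722303513, 476495317)                        // "date +%s.%N" output from the VM
	remote := time.Date(2024, 7, 30, 1, 38, 33, 390531517, time.UTC) // host wall clock at the same moment

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // assumed threshold for illustration; the log does not show the real value
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // prints 85.9638ms for these values
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the VM clock\n", delta)
	}
}
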
	I0730 01:38:33.501408  549392 start.go:83] releasing machines lock for "old-k8s-version-978883", held for 25.015377804s
	I0730 01:38:33.501436  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .DriverName
	I0730 01:38:33.501818  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetIP
	I0730 01:38:33.504373  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.504745  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:33.504776  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.504899  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .DriverName
	I0730 01:38:33.505435  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .DriverName
	I0730 01:38:33.505627  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .DriverName
	I0730 01:38:33.505721  549392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 01:38:33.505769  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:33.505897  549392 ssh_runner.go:195] Run: cat /version.json
	I0730 01:38:33.505920  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHHostname
	I0730 01:38:33.508524  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.508765  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.508958  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:33.508982  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.509253  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:33.509276  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:33.509310  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:33.509432  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHPort
	I0730 01:38:33.509519  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:33.509683  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHKeyPath
	I0730 01:38:33.509719  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:33.509847  549392 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/old-k8s-version-978883/id_rsa Username:docker}
	I0730 01:38:33.510118  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetSSHUsername
	I0730 01:38:33.510309  549392 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/old-k8s-version-978883/id_rsa Username:docker}
	I0730 01:38:33.624559  549392 ssh_runner.go:195] Run: systemctl --version
	I0730 01:38:33.630536  549392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 01:38:33.798538  549392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 01:38:33.810949  549392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 01:38:33.811020  549392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 01:38:33.829499  549392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
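
The find/mv step above sidelines any pre-existing bridge or podman CNI configs so they cannot conflict with the config minikube generates later. A rough Go equivalent of that rename pass, assuming sufficient privileges on /etc/cni/net.d; this is an illustration, not minikube's own code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs to *.mk_disabled,
// mirroring the find/mv pipeline in the log above.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist]
}
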
	I0730 01:38:33.829525  549392 start.go:495] detecting cgroup driver to use...
	I0730 01:38:33.829591  549392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 01:38:33.849535  549392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 01:38:33.865259  549392 docker.go:217] disabling cri-docker service (if available) ...
	I0730 01:38:33.865343  549392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 01:38:33.879213  549392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 01:38:33.894108  549392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 01:38:34.022109  549392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 01:38:34.160898  549392 docker.go:233] disabling docker service ...
	I0730 01:38:34.160969  549392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 01:38:34.175270  549392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 01:38:34.187810  549392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 01:38:34.322318  549392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 01:38:34.439145  549392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 01:38:34.453575  549392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 01:38:34.471330  549392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0730 01:38:34.471422  549392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:34.482619  549392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 01:38:34.482699  549392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:34.493750  549392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:34.507938  549392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
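
The sed commands above rewrite the CRI-O drop-in so the runtime uses the expected pause image, the cgroupfs cgroup manager, and conmon_cgroup = "pod". A sketch of the same line-level rewrite in Go; the path and settings are taken from the log, the implementation itself is only illustrative:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same substitutions as the sed commands above.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := string(data)

	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(out, fmt.Sprintf("pause_image = %q", pauseImage))
	out = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).
		ReplaceAllString(out, "") // drop any existing conmon_cgroup line first
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager))

	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs")
	fmt.Println("rewrite result:", err)
}
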
	I0730 01:38:34.520812  549392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 01:38:34.535228  549392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 01:38:34.544604  549392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0730 01:38:34.544673  549392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0730 01:38:34.557988  549392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
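
The failed sysctl above is expected on a fresh VM: the net.bridge.* keys only exist once br_netfilter is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A small sketch of that check-then-fallback sequence, assuming root privileges:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// Equivalent of "sysctl net.bridge.bridge-nf-call-iptables": the key only
	// appears after the br_netfilter module is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
		return nil
	}
	// Fallback used in the log: load the module explicitly.
	return exec.Command("modprobe", "br_netfilter").Run()
}

func enableIPForward() error {
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("br_netfilter:", err)
	}
	if err := enableIPForward(); err != nil {
		fmt.Println("ip_forward:", err)
	}
}
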
	I0730 01:38:34.567274  549392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:38:34.694752  549392 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 01:38:34.846417  549392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 01:38:34.846496  549392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 01:38:34.851119  549392 start.go:563] Will wait 60s for crictl version
	I0730 01:38:34.851179  549392 ssh_runner.go:195] Run: which crictl
	I0730 01:38:34.854599  549392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 01:38:34.897520  549392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 01:38:34.897611  549392 ssh_runner.go:195] Run: crio --version
	I0730 01:38:34.927916  549392 ssh_runner.go:195] Run: crio --version
	I0730 01:38:34.959324  549392 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0730 01:38:34.960446  549392 main.go:141] libmachine: (old-k8s-version-978883) Calling .GetIP
	I0730 01:38:34.963076  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:34.963485  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ab:5c", ip: ""} in network mk-old-k8s-version-978883: {Iface:virbr3 ExpiryTime:2024-07-30 02:38:23 +0000 UTC Type:0 Mac:52:54:00:76:ab:5c Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:old-k8s-version-978883 Clientid:01:52:54:00:76:ab:5c}
	I0730 01:38:34.963525  549392 main.go:141] libmachine: (old-k8s-version-978883) DBG | domain old-k8s-version-978883 has defined IP address 192.168.61.3 and MAC address 52:54:00:76:ab:5c in network mk-old-k8s-version-978883
	I0730 01:38:34.963733  549392 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0730 01:38:34.967661  549392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 01:38:34.979837  549392 kubeadm.go:883] updating cluster {Name:old-k8s-version-978883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-978883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 01:38:34.979992  549392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0730 01:38:34.980047  549392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:38:35.010391  549392 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0730 01:38:35.010476  549392 ssh_runner.go:195] Run: which lz4
	I0730 01:38:35.014462  549392 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0730 01:38:35.018404  549392 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0730 01:38:35.018446  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0730 01:38:36.406462  549392 crio.go:462] duration metric: took 1.392040298s to copy over tarball
	I0730 01:38:36.406542  549392 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
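
The preload path checks whether /preloaded.tar.lz4 already exists on the VM, transfers the ~473 MB tarball only when it is missing, and unpacks it into /var with security xattrs preserved. A sketch of that copy-if-missing-then-extract flow; the scp step is represented by a plain local copy here for brevity, and only the tar invocation is taken verbatim from the log:

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

const preloadTarget = "/preloaded.tar.lz4"

// copyIfMissing stands in for the scp step in the log: the tarball is only
// transferred when the stat existence check fails.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	cache := "/home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	if err := copyIfMissing(cache, preloadTarget); err != nil {
		fmt.Println("copy:", err)
		return
	}
	// Same tar invocation as the log: preserve security xattrs, decompress with
	// lz4, and unpack the container storage into /var.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", preloadTarget)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	fmt.Println("extract:", cmd.Run())
}
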
	I0730 01:38:40.806370  550170 start.go:364] duration metric: took 12.773414791s to acquireMachinesLock for "no-preload-123365"
	I0730 01:38:40.806439  550170 start.go:93] Provisioning new machine with config: &{Name:no-preload-123365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-123365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 01:38:40.806571  550170 start.go:125] createHost starting for "" (driver="kvm2")
	I0730 01:38:38.844525  549392 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.437948772s)
	I0730 01:38:38.844563  549392 crio.go:469] duration metric: took 2.438068256s to extract the tarball
	I0730 01:38:38.844572  549392 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0730 01:38:38.885598  549392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:38:38.932491  549392 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0730 01:38:38.932519  549392 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0730 01:38:38.932592  549392 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:38:38.932617  549392 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:38:38.932631  549392 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0730 01:38:38.932660  549392 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:38:38.932644  549392 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:38:38.932593  549392 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:38:38.932740  549392 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0730 01:38:38.932747  549392 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0730 01:38:38.933908  549392 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0730 01:38:38.934096  549392 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:38:38.934175  549392 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:38:38.934199  549392 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0730 01:38:38.934263  549392 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0730 01:38:38.934278  549392 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:38:38.934278  549392 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:38:38.934200  549392 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:38:39.154356  549392 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0730 01:38:39.162402  549392 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:38:39.169747  549392 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:38:39.181774  549392 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:38:39.189607  549392 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0730 01:38:39.201502  549392 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0730 01:38:39.227309  549392 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:38:39.231634  549392 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0730 01:38:39.231689  549392 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0730 01:38:39.231733  549392 ssh_runner.go:195] Run: which crictl
	I0730 01:38:39.255373  549392 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0730 01:38:39.255417  549392 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:38:39.255469  549392 ssh_runner.go:195] Run: which crictl
	I0730 01:38:39.281052  549392 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0730 01:38:39.281104  549392 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:38:39.281163  549392 ssh_runner.go:195] Run: which crictl
	I0730 01:38:39.322244  549392 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0730 01:38:39.322297  549392 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:38:39.322359  549392 ssh_runner.go:195] Run: which crictl
	I0730 01:38:39.332540  549392 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0730 01:38:39.332587  549392 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0730 01:38:39.332613  549392 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0730 01:38:39.332646  549392 ssh_runner.go:195] Run: which crictl
	I0730 01:38:39.332651  549392 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0730 01:38:39.332691  549392 ssh_runner.go:195] Run: which crictl
	I0730 01:38:39.337416  549392 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0730 01:38:39.337456  549392 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:38:39.337470  549392 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0730 01:38:39.337487  549392 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0730 01:38:39.337511  549392 ssh_runner.go:195] Run: which crictl
	I0730 01:38:39.337533  549392 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0730 01:38:39.337539  549392 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0730 01:38:39.339958  549392 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0730 01:38:39.339994  549392 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0730 01:38:39.451455  549392 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0730 01:38:39.453536  549392 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0730 01:38:39.453573  549392 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0730 01:38:39.453629  549392 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0730 01:38:39.453671  549392 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0730 01:38:39.456386  549392 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0730 01:38:39.456408  549392 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0730 01:38:39.494450  549392 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0730 01:38:40.253547  549392 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 01:38:40.390721  549392 cache_images.go:92] duration metric: took 1.458184205s to LoadCachedImages
	W0730 01:38:40.390803  549392 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19346-495103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
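
The warning above is the end state of LoadCachedImages: every required image was reported missing in the runtime ("needs transfer"), but the host-side cache under .minikube/cache/images did not contain the tarballs either, so the load failed and the images will be pulled instead. A small sketch of the cache-lookup half of that decision, with the image list and cache layout taken from the log (the runtime-side crictl/podman checks are elided):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

var requiredImages = []string{
	"registry.k8s.io/kube-apiserver:v1.20.0",
	"registry.k8s.io/kube-controller-manager:v1.20.0",
	"registry.k8s.io/kube-scheduler:v1.20.0",
	"registry.k8s.io/kube-proxy:v1.20.0",
	"registry.k8s.io/pause:3.2",
	"registry.k8s.io/etcd:3.4.13-0",
	"registry.k8s.io/coredns:1.7.0",
	"gcr.io/k8s-minikube/storage-provisioner:v5",
}

// cachePath maps an image reference onto the on-disk layout seen in the log,
// e.g. registry.k8s.io/pause:3.2 -> .../images/amd64/registry.k8s.io/pause_3.2
func cachePath(cacheRoot, image string) string {
	return filepath.Join(cacheRoot, "images", "amd64", strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheRoot := "/home/jenkins/minikube-integration/19346-495103/.minikube/cache"
	for _, img := range requiredImages {
		// In the real flow "needs transfer" is decided by inspecting the image
		// in the container runtime first; here we only model the cache lookup.
		p := cachePath(cacheRoot, img)
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("%s: not in local cache (%v), would fall back to pulling\n", img, err)
			continue
		}
		fmt.Printf("%s: would load from %s\n", img, p)
	}
}
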
	I0730 01:38:40.390823  549392 kubeadm.go:934] updating node { 192.168.61.3 8443 v1.20.0 crio true true} ...
	I0730 01:38:40.390982  549392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-978883 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-978883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
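
The ExecStart line in the drop-in above is assembled from the per-node settings: the binary path for the Kubernetes version, the hostname override, the node IP, and the CRI-O endpoint. A tiny sketch of that flag assembly; flag names and paths are copied from the log, the helper itself is only illustrative:

package main

import (
	"fmt"
	"strings"
)

// buildKubeletFlags recreates the ExecStart flag string from the systemd
// drop-in shown above for a given node.
func buildKubeletFlags(version, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--network-plugin=cni",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(buildKubeletFlags("v1.20.0", "old-k8s-version-978883", "192.168.61.3"))
}
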
	I0730 01:38:40.391088  549392 ssh_runner.go:195] Run: crio config
	I0730 01:38:40.440355  549392 cni.go:84] Creating CNI manager for ""
	I0730 01:38:40.440377  549392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 01:38:40.440387  549392 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 01:38:40.440406  549392 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-978883 NodeName:old-k8s-version-978883 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0730 01:38:40.440574  549392 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-978883"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 01:38:40.440658  549392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0730 01:38:40.451487  549392 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 01:38:40.451584  549392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 01:38:40.462428  549392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0730 01:38:40.479502  549392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 01:38:40.496025  549392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0730 01:38:40.513572  549392 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0730 01:38:40.517893  549392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
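
The grep/echo/cp pipeline above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping via a temp file. A Go sketch of the same filter-and-append; the real flow writes the temp file and copies it back with sudo, while a rename stands in here:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps
// `name` to `ip`, mirroring the grep -v / echo / cp pipeline in the log.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name, "")
	tmp := path + ".minikube.tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the log uses `sudo cp` instead of a rename
}

func main() {
	fmt.Println(pinHost("/etc/hosts", "192.168.61.3", "control-plane.minikube.internal"))
}
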
	I0730 01:38:40.530023  549392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:38:40.652968  549392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 01:38:40.670124  549392 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883 for IP: 192.168.61.3
	I0730 01:38:40.670155  549392 certs.go:194] generating shared ca certs ...
	I0730 01:38:40.670180  549392 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:38:40.670375  549392 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 01:38:40.670440  549392 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 01:38:40.670455  549392 certs.go:256] generating profile certs ...
	I0730 01:38:40.670549  549392 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/client.key
	I0730 01:38:40.670570  549392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/client.crt with IP's: []
	I0730 01:38:40.783351  549392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/client.crt ...
	I0730 01:38:40.783395  549392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/client.crt: {Name:mk81f153b83f7d2f7c00da2366ef4015f27ac11e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:38:40.783626  549392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/client.key ...
	I0730 01:38:40.783647  549392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/client.key: {Name:mkbbd083f52b7789b5577663346fd519e43d725c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:38:40.783765  549392 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.key.02bb6568
	I0730 01:38:40.783793  549392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.crt.02bb6568 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.3]
	I0730 01:38:41.401997  549392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.crt.02bb6568 ...
	I0730 01:38:41.402041  549392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.crt.02bb6568: {Name:mkc8739e8ea1c67835f61759319d810b10d80e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:38:41.402235  549392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.key.02bb6568 ...
	I0730 01:38:41.402260  549392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.key.02bb6568: {Name:mka29a9613b237537b082c2f63ffda5302c27cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:38:41.402363  549392 certs.go:381] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.crt.02bb6568 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.crt
	I0730 01:38:41.402454  549392 certs.go:385] copying /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.key.02bb6568 -> /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.key
	I0730 01:38:41.402532  549392 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/proxy-client.key
	I0730 01:38:41.402554  549392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/proxy-client.crt with IP's: []
	I0730 01:38:41.483877  549392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/proxy-client.crt ...
	I0730 01:38:41.483916  549392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/proxy-client.crt: {Name:mk30b8163d8025d71b67d601448d6b9bc6543cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:38:41.529409  549392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/proxy-client.key ...
	I0730 01:38:41.529456  549392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/proxy-client.key: {Name:mke5b61e48b3be7709ab744b3e62a9e02bc59161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
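
The certs.go lines above generate the profile's API server certificate signed by minikubeCA, with the service, localhost, and node IPs as SANs. A compressed Go sketch of what a CA-signed cert with those IP SANs looks like; it creates a throwaway CA in place of the existing minikubeCA key and skips all file and lock handling, so it is illustrative only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (the real flow reuses the
	// existing .minikube/ca.key, as the "skipping valid" lines above show).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API server cert with the IP SANs listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.3"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// The real flow writes apiserver.crt/.key under the profile directory.
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
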
	I0730 01:38:41.529759  549392 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 01:38:41.529817  549392 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 01:38:41.529832  549392 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 01:38:41.529864  549392 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 01:38:41.529896  549392 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 01:38:41.529922  549392 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 01:38:41.529969  549392 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:38:41.530862  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 01:38:41.568598  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 01:38:41.604540  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 01:38:41.638185  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 01:38:41.666280  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0730 01:38:41.699188  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 01:38:41.722562  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 01:38:41.755478  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/old-k8s-version-978883/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 01:38:41.779974  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 01:38:41.805286  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 01:38:41.828190  549392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 01:38:41.856663  549392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 01:38:41.875324  549392 ssh_runner.go:195] Run: openssl version
	I0730 01:38:41.881199  549392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 01:38:41.892887  549392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 01:38:41.897411  549392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 01:38:41.897485  549392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 01:38:41.903049  549392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 01:38:41.916825  549392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 01:38:41.928007  549392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:38:41.932303  549392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:38:41.932369  549392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:38:41.938105  549392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 01:38:41.952620  549392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 01:38:41.966334  549392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 01:38:41.970847  549392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 01:38:41.970918  549392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 01:38:41.976697  549392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
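
Each openssl x509 -hash / ln -fs pair above makes a certificate discoverable by OpenSSL-style clients, which look up CA certs in /etc/ssl/certs by subject hash (<hash>.0). A sketch that shells out to openssl for the hash and creates the symlink, assuming write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the openssl/ln pair from the log: compute the
// certificate's subject hash and symlink <hash>.0 to the PEM file.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/etc/ssl/certs/minikubeCA.pem"))
}
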
	I0730 01:38:41.987524  549392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 01:38:41.991628  549392 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 01:38:41.991700  549392 kubeadm.go:392] StartCluster: {Name:old-k8s-version-978883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-978883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:38:41.991801  549392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 01:38:41.991852  549392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 01:38:42.033921  549392 cri.go:89] found id: ""
	I0730 01:38:42.034005  549392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 01:38:42.045398  549392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0730 01:38:42.056608  549392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 01:38:42.067859  549392 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 01:38:42.067881  549392 kubeadm.go:157] found existing configuration files:
	
	I0730 01:38:42.067937  549392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 01:38:42.078577  549392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 01:38:42.078646  549392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 01:38:42.088899  549392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 01:38:42.099468  549392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 01:38:42.099534  549392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 01:38:42.109049  549392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 01:38:42.118482  549392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 01:38:42.118546  549392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 01:38:42.131881  549392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 01:38:42.144257  549392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 01:38:42.144369  549392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0730 01:38:42.157818  549392 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0730 01:38:42.311260  549392 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0730 01:38:42.311353  549392 kubeadm.go:310] [preflight] Running pre-flight checks
	I0730 01:38:42.496156  549392 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0730 01:38:42.496378  549392 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0730 01:38:42.496545  549392 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0730 01:38:42.707147  549392 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0730 01:38:40.941067  550170 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0730 01:38:40.941382  550170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:38:40.941468  550170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:38:40.957077  550170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I0730 01:38:40.957660  550170 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:38:40.958298  550170 main.go:141] libmachine: Using API Version  1
	I0730 01:38:40.958337  550170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:38:40.958750  550170 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:38:40.959030  550170 main.go:141] libmachine: (no-preload-123365) Calling .GetMachineName
	I0730 01:38:40.959216  550170 main.go:141] libmachine: (no-preload-123365) Calling .DriverName
	I0730 01:38:40.959401  550170 start.go:159] libmachine.API.Create for "no-preload-123365" (driver="kvm2")
	I0730 01:38:40.959427  550170 client.go:168] LocalClient.Create starting
	I0730 01:38:40.959467  550170 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem
	I0730 01:38:40.959512  550170 main.go:141] libmachine: Decoding PEM data...
	I0730 01:38:40.959533  550170 main.go:141] libmachine: Parsing certificate...
	I0730 01:38:40.959606  550170 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem
	I0730 01:38:40.959638  550170 main.go:141] libmachine: Decoding PEM data...
	I0730 01:38:40.959654  550170 main.go:141] libmachine: Parsing certificate...
	I0730 01:38:40.959679  550170 main.go:141] libmachine: Running pre-create checks...
	I0730 01:38:40.959691  550170 main.go:141] libmachine: (no-preload-123365) Calling .PreCreateCheck
	I0730 01:38:40.960073  550170 main.go:141] libmachine: (no-preload-123365) Calling .GetConfigRaw
	I0730 01:38:40.960561  550170 main.go:141] libmachine: Creating machine...
	I0730 01:38:40.960582  550170 main.go:141] libmachine: (no-preload-123365) Calling .Create
	I0730 01:38:40.960757  550170 main.go:141] libmachine: (no-preload-123365) Creating KVM machine...
	I0730 01:38:40.962088  550170 main.go:141] libmachine: (no-preload-123365) DBG | found existing default KVM network
	I0730 01:38:40.963813  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:40.963649  550287 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000280240}
	I0730 01:38:40.963853  550170 main.go:141] libmachine: (no-preload-123365) DBG | created network xml: 
	I0730 01:38:40.963870  550170 main.go:141] libmachine: (no-preload-123365) DBG | <network>
	I0730 01:38:40.963880  550170 main.go:141] libmachine: (no-preload-123365) DBG |   <name>mk-no-preload-123365</name>
	I0730 01:38:40.963888  550170 main.go:141] libmachine: (no-preload-123365) DBG |   <dns enable='no'/>
	I0730 01:38:40.963897  550170 main.go:141] libmachine: (no-preload-123365) DBG |   
	I0730 01:38:40.963907  550170 main.go:141] libmachine: (no-preload-123365) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0730 01:38:40.963918  550170 main.go:141] libmachine: (no-preload-123365) DBG |     <dhcp>
	I0730 01:38:40.963932  550170 main.go:141] libmachine: (no-preload-123365) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0730 01:38:40.963949  550170 main.go:141] libmachine: (no-preload-123365) DBG |     </dhcp>
	I0730 01:38:40.963961  550170 main.go:141] libmachine: (no-preload-123365) DBG |   </ip>
	I0730 01:38:40.963973  550170 main.go:141] libmachine: (no-preload-123365) DBG |   
	I0730 01:38:40.963983  550170 main.go:141] libmachine: (no-preload-123365) DBG | </network>
	I0730 01:38:40.963997  550170 main.go:141] libmachine: (no-preload-123365) DBG | 
	I0730 01:38:41.056157  550170 main.go:141] libmachine: (no-preload-123365) DBG | trying to create private KVM network mk-no-preload-123365 192.168.39.0/24...
	I0730 01:38:41.135418  550170 main.go:141] libmachine: (no-preload-123365) DBG | private KVM network mk-no-preload-123365 192.168.39.0/24 created
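
The XML above is what gets defined as the private KVM network for the new machine: a /24 with the gateway on .1 and a DHCP range for the guests. A sketch that renders the same shape of XML from the chosen subnet values; it only renders the document, whereas the real flow hands the result to libvirt to define and start the network:

package main

import (
	"os"
	"text/template"
)

// Same shape of XML as the network definition logged above.
const networkXML = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type netParams struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

func main() {
	p := netParams{ // values for mk-no-preload-123365 from the log above
		Name:      "mk-no-preload-123365",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	t := template.Must(template.New("net").Parse(networkXML))
	_ = t.Execute(os.Stdout, p)
}
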
	I0730 01:38:41.135450  550170 main.go:141] libmachine: (no-preload-123365) Setting up store path in /home/jenkins/minikube-integration/19346-495103/.minikube/machines/no-preload-123365 ...
	I0730 01:38:41.135463  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:41.135363  550287 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 01:38:41.135476  550170 main.go:141] libmachine: (no-preload-123365) Building disk image from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 01:38:41.135504  550170 main.go:141] libmachine: (no-preload-123365) Downloading /home/jenkins/minikube-integration/19346-495103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0730 01:38:41.411147  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:41.411017  550287 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/no-preload-123365/id_rsa...
	I0730 01:38:41.675344  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:41.675208  550287 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/no-preload-123365/no-preload-123365.rawdisk...
	I0730 01:38:41.675385  550170 main.go:141] libmachine: (no-preload-123365) DBG | Writing magic tar header
	I0730 01:38:41.675407  550170 main.go:141] libmachine: (no-preload-123365) DBG | Writing SSH key tar header
	I0730 01:38:41.675418  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:41.675335  550287 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/no-preload-123365 ...
	I0730 01:38:41.675513  550170 main.go:141] libmachine: (no-preload-123365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines/no-preload-123365
	I0730 01:38:41.675551  550170 main.go:141] libmachine: (no-preload-123365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube/machines
	I0730 01:38:41.675561  550170 main.go:141] libmachine: (no-preload-123365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 01:38:41.675574  550170 main.go:141] libmachine: (no-preload-123365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19346-495103
	I0730 01:38:41.675583  550170 main.go:141] libmachine: (no-preload-123365) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0730 01:38:41.675593  550170 main.go:141] libmachine: (no-preload-123365) DBG | Checking permissions on dir: /home/jenkins
	I0730 01:38:41.675603  550170 main.go:141] libmachine: (no-preload-123365) DBG | Checking permissions on dir: /home
	I0730 01:38:41.675618  550170 main.go:141] libmachine: (no-preload-123365) DBG | Skipping /home - not owner
	I0730 01:38:41.675631  550170 main.go:141] libmachine: (no-preload-123365) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines/no-preload-123365 (perms=drwx------)
	I0730 01:38:41.675674  550170 main.go:141] libmachine: (no-preload-123365) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube/machines (perms=drwxr-xr-x)
	I0730 01:38:41.675704  550170 main.go:141] libmachine: (no-preload-123365) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103/.minikube (perms=drwxr-xr-x)
	I0730 01:38:41.675718  550170 main.go:141] libmachine: (no-preload-123365) Setting executable bit set on /home/jenkins/minikube-integration/19346-495103 (perms=drwxrwxr-x)
	I0730 01:38:41.675727  550170 main.go:141] libmachine: (no-preload-123365) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0730 01:38:41.675739  550170 main.go:141] libmachine: (no-preload-123365) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0730 01:38:41.675747  550170 main.go:141] libmachine: (no-preload-123365) Creating domain...
	I0730 01:38:41.677050  550170 main.go:141] libmachine: (no-preload-123365) define libvirt domain using xml: 
	I0730 01:38:41.677082  550170 main.go:141] libmachine: (no-preload-123365) <domain type='kvm'>
	I0730 01:38:41.677095  550170 main.go:141] libmachine: (no-preload-123365)   <name>no-preload-123365</name>
	I0730 01:38:41.677108  550170 main.go:141] libmachine: (no-preload-123365)   <memory unit='MiB'>2200</memory>
	I0730 01:38:41.677116  550170 main.go:141] libmachine: (no-preload-123365)   <vcpu>2</vcpu>
	I0730 01:38:41.677123  550170 main.go:141] libmachine: (no-preload-123365)   <features>
	I0730 01:38:41.677132  550170 main.go:141] libmachine: (no-preload-123365)     <acpi/>
	I0730 01:38:41.677142  550170 main.go:141] libmachine: (no-preload-123365)     <apic/>
	I0730 01:38:41.677151  550170 main.go:141] libmachine: (no-preload-123365)     <pae/>
	I0730 01:38:41.677164  550170 main.go:141] libmachine: (no-preload-123365)     
	I0730 01:38:41.677175  550170 main.go:141] libmachine: (no-preload-123365)   </features>
	I0730 01:38:41.677186  550170 main.go:141] libmachine: (no-preload-123365)   <cpu mode='host-passthrough'>
	I0730 01:38:41.677196  550170 main.go:141] libmachine: (no-preload-123365)   
	I0730 01:38:41.677204  550170 main.go:141] libmachine: (no-preload-123365)   </cpu>
	I0730 01:38:41.677212  550170 main.go:141] libmachine: (no-preload-123365)   <os>
	I0730 01:38:41.677223  550170 main.go:141] libmachine: (no-preload-123365)     <type>hvm</type>
	I0730 01:38:41.677234  550170 main.go:141] libmachine: (no-preload-123365)     <boot dev='cdrom'/>
	I0730 01:38:41.677244  550170 main.go:141] libmachine: (no-preload-123365)     <boot dev='hd'/>
	I0730 01:38:41.677253  550170 main.go:141] libmachine: (no-preload-123365)     <bootmenu enable='no'/>
	I0730 01:38:41.677263  550170 main.go:141] libmachine: (no-preload-123365)   </os>
	I0730 01:38:41.677272  550170 main.go:141] libmachine: (no-preload-123365)   <devices>
	I0730 01:38:41.677283  550170 main.go:141] libmachine: (no-preload-123365)     <disk type='file' device='cdrom'>
	I0730 01:38:41.677299  550170 main.go:141] libmachine: (no-preload-123365)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/no-preload-123365/boot2docker.iso'/>
	I0730 01:38:41.677310  550170 main.go:141] libmachine: (no-preload-123365)       <target dev='hdc' bus='scsi'/>
	I0730 01:38:41.677319  550170 main.go:141] libmachine: (no-preload-123365)       <readonly/>
	I0730 01:38:41.677328  550170 main.go:141] libmachine: (no-preload-123365)     </disk>
	I0730 01:38:41.677342  550170 main.go:141] libmachine: (no-preload-123365)     <disk type='file' device='disk'>
	I0730 01:38:41.677360  550170 main.go:141] libmachine: (no-preload-123365)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0730 01:38:41.677377  550170 main.go:141] libmachine: (no-preload-123365)       <source file='/home/jenkins/minikube-integration/19346-495103/.minikube/machines/no-preload-123365/no-preload-123365.rawdisk'/>
	I0730 01:38:41.677390  550170 main.go:141] libmachine: (no-preload-123365)       <target dev='hda' bus='virtio'/>
	I0730 01:38:41.677399  550170 main.go:141] libmachine: (no-preload-123365)     </disk>
	I0730 01:38:41.677411  550170 main.go:141] libmachine: (no-preload-123365)     <interface type='network'>
	I0730 01:38:41.677425  550170 main.go:141] libmachine: (no-preload-123365)       <source network='mk-no-preload-123365'/>
	I0730 01:38:41.677435  550170 main.go:141] libmachine: (no-preload-123365)       <model type='virtio'/>
	I0730 01:38:41.677448  550170 main.go:141] libmachine: (no-preload-123365)     </interface>
	I0730 01:38:41.677466  550170 main.go:141] libmachine: (no-preload-123365)     <interface type='network'>
	I0730 01:38:41.677478  550170 main.go:141] libmachine: (no-preload-123365)       <source network='default'/>
	I0730 01:38:41.677486  550170 main.go:141] libmachine: (no-preload-123365)       <model type='virtio'/>
	I0730 01:38:41.677497  550170 main.go:141] libmachine: (no-preload-123365)     </interface>
	I0730 01:38:41.677507  550170 main.go:141] libmachine: (no-preload-123365)     <serial type='pty'>
	I0730 01:38:41.677516  550170 main.go:141] libmachine: (no-preload-123365)       <target port='0'/>
	I0730 01:38:41.677525  550170 main.go:141] libmachine: (no-preload-123365)     </serial>
	I0730 01:38:41.677534  550170 main.go:141] libmachine: (no-preload-123365)     <console type='pty'>
	I0730 01:38:41.677545  550170 main.go:141] libmachine: (no-preload-123365)       <target type='serial' port='0'/>
	I0730 01:38:41.677553  550170 main.go:141] libmachine: (no-preload-123365)     </console>
	I0730 01:38:41.677560  550170 main.go:141] libmachine: (no-preload-123365)     <rng model='virtio'>
	I0730 01:38:41.677572  550170 main.go:141] libmachine: (no-preload-123365)       <backend model='random'>/dev/random</backend>
	I0730 01:38:41.677581  550170 main.go:141] libmachine: (no-preload-123365)     </rng>
	I0730 01:38:41.677588  550170 main.go:141] libmachine: (no-preload-123365)     
	I0730 01:38:41.677598  550170 main.go:141] libmachine: (no-preload-123365)     
	I0730 01:38:41.677606  550170 main.go:141] libmachine: (no-preload-123365)   </devices>
	I0730 01:38:41.677616  550170 main.go:141] libmachine: (no-preload-123365) </domain>
	I0730 01:38:41.677626  550170 main.go:141] libmachine: (no-preload-123365) 
	I0730 01:38:41.907008  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c7:91:a3 in network default
	I0730 01:38:41.907842  550170 main.go:141] libmachine: (no-preload-123365) Ensuring networks are active...
	I0730 01:38:41.907874  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c3:a1:77 in network mk-no-preload-123365
	I0730 01:38:41.908759  550170 main.go:141] libmachine: (no-preload-123365) Ensuring network default is active
	I0730 01:38:41.909270  550170 main.go:141] libmachine: (no-preload-123365) Ensuring network mk-no-preload-123365 is active
	I0730 01:38:41.909978  550170 main.go:141] libmachine: (no-preload-123365) Getting domain xml...
	I0730 01:38:41.910895  550170 main.go:141] libmachine: (no-preload-123365) Creating domain...
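Annotation (not part of the log): the lines above show libmachine rendering libvirt network and domain XML for no-preload-123365 and defining them through the libvirt API. A minimal sketch of the equivalent steps, driven through the virsh CLI from Go, is below; the XML file names are hypothetical, and minikube itself talks to libvirt directly rather than shelling out.

// Hedged sketch: define and start a libvirt network and domain from XML,
// roughly mirroring the steps logged above. Assumes virsh is on PATH and
// the XML was saved to the (hypothetical) files named here.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
	}
	log.Printf("virsh %v:\n%s", args, out)
}

func main() {
	run("net-define", "mk-no-preload-123365-net.xml") // hypothetical file
	run("net-start", "mk-no-preload-123365")
	run("define", "no-preload-123365-domain.xml") // hypothetical file
	run("start", "no-preload-123365")
}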
	I0730 01:38:40.539893  549668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 01:38:40.539925  549668 machine.go:97] duration metric: took 7.011247119s to provisionDockerMachine
	I0730 01:38:40.539940  549668 start.go:293] postStartSetup for "kubernetes-upgrade-599146" (driver="kvm2")
	I0730 01:38:40.539956  549668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 01:38:40.539990  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:38:40.540549  549668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 01:38:40.540586  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:40.543720  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.544270  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:40.544300  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.544549  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:38:40.544784  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:40.544978  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:38:40.545151  549668 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa Username:docker}
	I0730 01:38:40.642514  549668 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 01:38:40.646730  549668 info.go:137] Remote host: Buildroot 2023.02.9
	I0730 01:38:40.646765  549668 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/addons for local assets ...
	I0730 01:38:40.646864  549668 filesync.go:126] Scanning /home/jenkins/minikube-integration/19346-495103/.minikube/files for local assets ...
	I0730 01:38:40.646971  549668 filesync.go:149] local asset: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem -> 5023842.pem in /etc/ssl/certs
	I0730 01:38:40.647132  549668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 01:38:40.656469  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:38:40.683163  549668 start.go:296] duration metric: took 143.208441ms for postStartSetup
	I0730 01:38:40.683206  549668 fix.go:56] duration metric: took 7.181668073s for fixHost
	I0730 01:38:40.683227  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:40.686272  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.686609  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:40.686640  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.686835  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:38:40.687090  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:40.687290  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:40.687509  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:38:40.687733  549668 main.go:141] libmachine: Using SSH client type: native
	I0730 01:38:40.687991  549668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I0730 01:38:40.688006  549668 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0730 01:38:40.806196  549668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722303520.794560339
	
	I0730 01:38:40.806232  549668 fix.go:216] guest clock: 1722303520.794560339
	I0730 01:38:40.806241  549668 fix.go:229] Guest: 2024-07-30 01:38:40.794560339 +0000 UTC Remote: 2024-07-30 01:38:40.683210262 +0000 UTC m=+41.003133660 (delta=111.350077ms)
	I0730 01:38:40.806271  549668 fix.go:200] guest clock delta is within tolerance: 111.350077ms
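Annotation (not part of the log): the fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host's timestamp, and accept the skew when it is within tolerance (111ms here). A minimal sketch of that comparison, using the values from this log; the 2-second tolerance is an assumption for illustration, not necessarily minikube's actual threshold.

// Hedged sketch of the guest-clock tolerance check reported above.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1722303520, 794560339) // parsed from `date +%s.%N` on the guest
	remote := time.Date(2024, 7, 30, 1, 38, 40, 683210262, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // assumed tolerance
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}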
	I0730 01:38:40.806278  549668 start.go:83] releasing machines lock for "kubernetes-upgrade-599146", held for 7.304771433s
	I0730 01:38:40.806315  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:38:40.806646  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetIP
	I0730 01:38:40.809831  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.810291  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:40.810323  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.810497  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:38:40.811089  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:38:40.811261  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .DriverName
	I0730 01:38:40.811423  549668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 01:38:40.811473  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:40.811584  549668 ssh_runner.go:195] Run: cat /version.json
	I0730 01:38:40.811608  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHHostname
	I0730 01:38:40.814238  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.814607  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:40.814653  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.814681  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.814747  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:38:40.814915  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:40.815091  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:38:40.815115  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:40.815140  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:40.815234  549668 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa Username:docker}
	I0730 01:38:40.815281  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHPort
	I0730 01:38:40.815426  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHKeyPath
	I0730 01:38:40.815558  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetSSHUsername
	I0730 01:38:40.815758  549668 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/kubernetes-upgrade-599146/id_rsa Username:docker}
	I0730 01:38:40.894048  549668 ssh_runner.go:195] Run: systemctl --version
	I0730 01:38:40.926931  549668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 01:38:41.091719  549668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0730 01:38:41.100745  549668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0730 01:38:41.100822  549668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 01:38:41.111370  549668 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 01:38:41.111401  549668 start.go:495] detecting cgroup driver to use...
	I0730 01:38:41.111477  549668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 01:38:41.131333  549668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 01:38:41.148495  549668 docker.go:217] disabling cri-docker service (if available) ...
	I0730 01:38:41.148561  549668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 01:38:41.163713  549668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 01:38:41.179211  549668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 01:38:41.340072  549668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 01:38:41.509708  549668 docker.go:233] disabling docker service ...
	I0730 01:38:41.509781  549668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 01:38:41.546821  549668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 01:38:41.564651  549668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 01:38:41.771143  549668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 01:38:41.953867  549668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 01:38:41.983499  549668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 01:38:42.002657  549668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0730 01:38:42.002729  549668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:42.027816  549668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 01:38:42.027914  549668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:42.040184  549668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:42.051358  549668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:42.063397  549668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 01:38:42.075315  549668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:42.094232  549668 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:42.137662  549668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 01:38:42.161985  549668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 01:38:42.185485  549668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 01:38:42.268332  549668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:38:42.528201  549668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 01:38:43.812876  549668 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.284630508s)
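Annotation (not part of the log): the sed commands above point CRI-O at the pause image, switch it to the cgroupfs cgroup manager, and set conmon_cgroup before the daemon-reload and restart. As an illustration only, a sketch of a drop-in that would express the same settings; the exact file contents are an assumption, and minikube edits the existing 02-crio.conf in place rather than rewriting it.

// Hedged sketch: write a CRI-O drop-in equivalent to the sed edits above.
package main

import (
	"log"
	"os"
)

const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
`

func main() {
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload && systemctl restart crio` follows in the log.
}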
	I0730 01:38:43.812914  549668 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 01:38:43.812971  549668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 01:38:43.818190  549668 start.go:563] Will wait 60s for crictl version
	I0730 01:38:43.818257  549668 ssh_runner.go:195] Run: which crictl
	I0730 01:38:43.822888  549668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 01:38:43.870196  549668 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0730 01:38:43.870295  549668 ssh_runner.go:195] Run: crio --version
	I0730 01:38:43.899373  549668 ssh_runner.go:195] Run: crio --version
	I0730 01:38:43.934640  549668 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0730 01:38:43.935982  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) Calling .GetIP
	I0730 01:38:43.939078  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:43.939589  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:c0:27", ip: ""} in network mk-kubernetes-upgrade-599146: {Iface:virbr2 ExpiryTime:2024-07-30 02:37:32 +0000 UTC Type:0 Mac:52:54:00:46:c0:27 Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:kubernetes-upgrade-599146 Clientid:01:52:54:00:46:c0:27}
	I0730 01:38:43.939620  549668 main.go:141] libmachine: (kubernetes-upgrade-599146) DBG | domain kubernetes-upgrade-599146 has defined IP address 192.168.50.97 and MAC address 52:54:00:46:c0:27 in network mk-kubernetes-upgrade-599146
	I0730 01:38:43.939837  549668 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0730 01:38:43.944135  549668 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-599146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-599146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 01:38:43.944278  549668 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0730 01:38:43.944342  549668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:38:43.985049  549668 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 01:38:43.985078  549668 crio.go:433] Images already preloaded, skipping extraction
	I0730 01:38:43.985150  549668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 01:38:44.028370  549668 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 01:38:44.028404  549668 cache_images.go:84] Images are preloaded, skipping loading
	I0730 01:38:44.028415  549668 kubeadm.go:934] updating node { 192.168.50.97 8443 v1.31.0-beta.0 crio true true} ...
	I0730 01:38:44.028535  549668 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-599146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-599146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 01:38:44.028610  549668 ssh_runner.go:195] Run: crio config
	I0730 01:38:44.089483  549668 cni.go:84] Creating CNI manager for ""
	I0730 01:38:44.089511  549668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 01:38:44.089525  549668 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 01:38:44.089561  549668 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.97 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-599146 NodeName:kubernetes-upgrade-599146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs
/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 01:38:44.089761  549668 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-599146"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0730 01:38:44.089843  549668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0730 01:38:44.100697  549668 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 01:38:44.100806  549668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 01:38:44.110947  549668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0730 01:38:44.131055  549668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0730 01:38:44.149786  549668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
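Annotation (not part of the log): the kubeadm configuration rendered earlier is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. A configuration like this is typically handed to kubeadm via --config; the sketch below shows that pattern with an illustrative path and flags, while minikube drives kubeadm itself with its own arguments.

// Hedged sketch: feeding a generated config file to kubeadm.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm init failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}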
	I0730 01:38:44.168058  549668 ssh_runner.go:195] Run: grep 192.168.50.97	control-plane.minikube.internal$ /etc/hosts
	I0730 01:38:44.172103  549668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 01:38:44.355016  549668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 01:38:44.373046  549668 certs.go:68] Setting up /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146 for IP: 192.168.50.97
	I0730 01:38:44.373077  549668 certs.go:194] generating shared ca certs ...
	I0730 01:38:44.373098  549668 certs.go:226] acquiring lock for ca certs: {Name:mkfbd4f4db62307e023a16dc0b63f79f65d3d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 01:38:44.373275  549668 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key
	I0730 01:38:44.373315  549668 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key
	I0730 01:38:44.373334  549668 certs.go:256] generating profile certs ...
	I0730 01:38:44.373437  549668 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/client.key
	I0730 01:38:44.373501  549668 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.key.04b001d3
	I0730 01:38:44.373560  549668 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.key
	I0730 01:38:44.373709  549668 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem (1338 bytes)
	W0730 01:38:44.373754  549668 certs.go:480] ignoring /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384_empty.pem, impossibly tiny 0 bytes
	I0730 01:38:44.373767  549668 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 01:38:44.373801  549668 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/ca.pem (1082 bytes)
	I0730 01:38:44.373839  549668 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/cert.pem (1123 bytes)
	I0730 01:38:44.373870  549668 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/certs/key.pem (1679 bytes)
	I0730 01:38:44.373925  549668 certs.go:484] found cert: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem (1708 bytes)
	I0730 01:38:44.374745  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 01:38:44.401197  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0730 01:38:44.427545  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 01:38:44.452468  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0730 01:38:44.478535  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0730 01:38:44.504615  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 01:38:44.585538  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 01:38:44.661201  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/kubernetes-upgrade-599146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0730 01:38:42.811747  549392 out.go:204]   - Generating certificates and keys ...
	I0730 01:38:42.811865  549392 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0730 01:38:42.811975  549392 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0730 01:38:42.812110  549392 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0730 01:38:42.984903  549392 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0730 01:38:43.089013  549392 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0730 01:38:43.258437  549392 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0730 01:38:43.721409  549392 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0730 01:38:43.721604  549392 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-978883] and IPs [192.168.61.3 127.0.0.1 ::1]
	I0730 01:38:43.927301  549392 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0730 01:38:43.927553  549392 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-978883] and IPs [192.168.61.3 127.0.0.1 ::1]
	I0730 01:38:44.235630  549392 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0730 01:38:44.412526  549392 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0730 01:38:44.786883  549392 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0730 01:38:44.787185  549392 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0730 01:38:45.094911  549392 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0730 01:38:45.188999  549392 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0730 01:38:45.427586  549392 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0730 01:38:45.607144  549392 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0730 01:38:45.624916  549392 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0730 01:38:45.626480  549392 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0730 01:38:45.626556  549392 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0730 01:38:45.772361  549392 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0730 01:38:45.773923  549392 out.go:204]   - Booting up control plane ...
	I0730 01:38:45.774076  549392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0730 01:38:45.783932  549392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0730 01:38:45.792103  549392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0730 01:38:45.793886  549392 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0730 01:38:45.800931  549392 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0730 01:38:43.800182  550170 main.go:141] libmachine: (no-preload-123365) Waiting to get IP...
	I0730 01:38:43.801271  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c3:a1:77 in network mk-no-preload-123365
	I0730 01:38:43.801824  550170 main.go:141] libmachine: (no-preload-123365) DBG | unable to find current IP address of domain no-preload-123365 in network mk-no-preload-123365
	I0730 01:38:43.801853  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:43.801802  550287 retry.go:31] will retry after 251.75248ms: waiting for machine to come up
	I0730 01:38:44.055460  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c3:a1:77 in network mk-no-preload-123365
	I0730 01:38:44.056117  550170 main.go:141] libmachine: (no-preload-123365) DBG | unable to find current IP address of domain no-preload-123365 in network mk-no-preload-123365
	I0730 01:38:44.056149  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:44.056066  550287 retry.go:31] will retry after 247.126558ms: waiting for machine to come up
	I0730 01:38:44.304613  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c3:a1:77 in network mk-no-preload-123365
	I0730 01:38:44.305198  550170 main.go:141] libmachine: (no-preload-123365) DBG | unable to find current IP address of domain no-preload-123365 in network mk-no-preload-123365
	I0730 01:38:44.305231  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:44.305146  550287 retry.go:31] will retry after 464.739171ms: waiting for machine to come up
	I0730 01:38:44.771697  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c3:a1:77 in network mk-no-preload-123365
	I0730 01:38:44.772263  550170 main.go:141] libmachine: (no-preload-123365) DBG | unable to find current IP address of domain no-preload-123365 in network mk-no-preload-123365
	I0730 01:38:44.772292  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:44.772226  550287 retry.go:31] will retry after 530.18284ms: waiting for machine to come up
	I0730 01:38:45.304021  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c3:a1:77 in network mk-no-preload-123365
	I0730 01:38:45.304731  550170 main.go:141] libmachine: (no-preload-123365) DBG | unable to find current IP address of domain no-preload-123365 in network mk-no-preload-123365
	I0730 01:38:45.304761  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:45.304682  550287 retry.go:31] will retry after 651.700798ms: waiting for machine to come up
	I0730 01:38:45.958248  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c3:a1:77 in network mk-no-preload-123365
	I0730 01:38:45.958975  550170 main.go:141] libmachine: (no-preload-123365) DBG | unable to find current IP address of domain no-preload-123365 in network mk-no-preload-123365
	I0730 01:38:45.959020  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:45.958872  550287 retry.go:31] will retry after 728.212841ms: waiting for machine to come up
	I0730 01:38:46.688306  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c3:a1:77 in network mk-no-preload-123365
	I0730 01:38:46.688927  550170 main.go:141] libmachine: (no-preload-123365) DBG | unable to find current IP address of domain no-preload-123365 in network mk-no-preload-123365
	I0730 01:38:46.688969  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:46.688864  550287 retry.go:31] will retry after 1.128994721s: waiting for machine to come up
	I0730 01:38:47.819058  550170 main.go:141] libmachine: (no-preload-123365) DBG | domain no-preload-123365 has defined MAC address 52:54:00:c3:a1:77 in network mk-no-preload-123365
	I0730 01:38:47.819507  550170 main.go:141] libmachine: (no-preload-123365) DBG | unable to find current IP address of domain no-preload-123365 in network mk-no-preload-123365
	I0730 01:38:47.819537  550170 main.go:141] libmachine: (no-preload-123365) DBG | I0730 01:38:47.819459  550287 retry.go:31] will retry after 1.046046047s: waiting for machine to come up
	I0730 01:38:44.891348  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 01:38:45.026660  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/certs/502384.pem --> /usr/share/ca-certificates/502384.pem (1338 bytes)
	I0730 01:38:45.153441  549668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/ssl/certs/5023842.pem --> /usr/share/ca-certificates/5023842.pem (1708 bytes)
	I0730 01:38:45.277193  549668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 01:38:45.371483  549668 ssh_runner.go:195] Run: openssl version
	I0730 01:38:45.405254  549668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 01:38:45.481892  549668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:38:45.510105  549668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:38:45.510183  549668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 01:38:45.533497  549668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 01:38:45.577983  549668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/502384.pem && ln -fs /usr/share/ca-certificates/502384.pem /etc/ssl/certs/502384.pem"
	I0730 01:38:45.682141  549668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/502384.pem
	I0730 01:38:45.695477  549668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 00:23 /usr/share/ca-certificates/502384.pem
	I0730 01:38:45.695558  549668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/502384.pem
	I0730 01:38:45.709965  549668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/502384.pem /etc/ssl/certs/51391683.0"
	I0730 01:38:45.754154  549668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5023842.pem && ln -fs /usr/share/ca-certificates/5023842.pem /etc/ssl/certs/5023842.pem"
	I0730 01:38:45.792424  549668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5023842.pem
	I0730 01:38:45.809619  549668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 00:23 /usr/share/ca-certificates/5023842.pem
	I0730 01:38:45.809730  549668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5023842.pem
	I0730 01:38:45.823528  549668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5023842.pem /etc/ssl/certs/3ec20f2e.0"
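Annotation (not part of the log): the sequence above installs each CA certificate under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of that pattern follows; paths are illustrative and the commands would need the same root privileges the logged runs use.

// Hedged sketch: compute a certificate's OpenSSL subject hash and link it
// into /etc/ssl/certs as "<hash>.0", as in the logged ln -fs commands.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}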
	I0730 01:38:45.874336  549668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 01:38:45.889281  549668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 01:38:45.906511  549668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 01:38:45.916375  549668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 01:38:45.933923  549668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 01:38:45.947899  549668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 01:38:45.955207  549668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0730 01:38:45.963456  549668 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-599146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-599146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 01:38:45.963586  549668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 01:38:45.963694  549668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 01:38:46.061982  549668 cri.go:89] found id: "17ab4ece67bdc282fdbe90309427d51c6fa265b22df44f091e4e13100eb77730"
	I0730 01:38:46.062015  549668 cri.go:89] found id: "2502e1323115fb4630f99639626e453231ed0a2c6ae6e214cade910d00cf8978"
	I0730 01:38:46.062021  549668 cri.go:89] found id: "6cbfc740975da61098e70415bad5df14b8278d8038e2d61518c4a717f8bd444b"
	I0730 01:38:46.062030  549668 cri.go:89] found id: "b86621675f9ee3aed8f14a73503470a682bddfb4543eec2d991b45d4465639f0"
	I0730 01:38:46.062034  549668 cri.go:89] found id: "9599ca3f6899b2f53666d95eefcd1c4ceaaee6f48806e84e7e8b4f16d0caa50c"
	I0730 01:38:46.062038  549668 cri.go:89] found id: "e440928c918279576a970515e001133007787289109b50fc09d79101c059d3da"
	I0730 01:38:46.062042  549668 cri.go:89] found id: "1188baed8c08e5d2cfdedff9ff069549910b7f9e59c1e9805a6f7441b7c1f760"
	I0730 01:38:46.062046  549668 cri.go:89] found id: "48fb7d7b729a1c011fd93ad1a0e521dddcfe5a383422ad4bcddbd7ed8d7c8a5d"
	I0730 01:38:46.062049  549668 cri.go:89] found id: "bb43834abb7b3e97ef2a3a1fa0fef81b4fdb0aed5889139c739b47dc545363e9"
	I0730 01:38:46.062058  549668 cri.go:89] found id: "70c113fd2c4837acd431e9c4479bd257cc88edd2f3feff05cee92c07ed9658e5"
	I0730 01:38:46.062064  549668 cri.go:89] found id: "fad07bc569efeb0a126c2c6a8b71cb252e6c232ac0fa2b38c5f9320c8ffcaa27"
	I0730 01:38:46.062068  549668 cri.go:89] found id: "53330b55f85c2c7c6795b2a7fa99fee2ffd2b7a15dfed5c54363309704316d9a"
	I0730 01:38:46.062072  549668 cri.go:89] found id: "b9837e0788620bf0233f9ee9b9cc1d15e1bde12d75e4a707ae95aa60d41fecaf"
	I0730 01:38:46.062076  549668 cri.go:89] found id: "4bcb6984dedb2819e8b261d52c78e9f74d5596827b0725eace5a86ee45e4d09c"
	I0730 01:38:46.062083  549668 cri.go:89] found id: "e4153219788c725f2dcc9fb0a7a107db86008c1c9a538c18f4f464c2b2149721"
	I0730 01:38:46.062090  549668 cri.go:89] found id: ""
	I0730 01:38:46.062151  549668 ssh_runner.go:195] Run: sudo runc list -f json
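Annotation (not part of the log): the "found id" lines above come from listing CRI containers with a namespace label filter. A sketch of the same listing follows; it uses the flags from the logged command and assumes crictl is on PATH with access to the CRI socket (the log runs it via sudo).

// Hedged sketch of the container-ID listing shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}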
	
	
	==> CRI-O <==
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.883119526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=452f4347-6c4a-4091-9f5e-bcaa25566eef name=/runtime.v1.RuntimeService/Version
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.884139396Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=383a7fb0-62d9-4229-ac7a-3ae1d2b8b4f7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.884575951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722303556884552226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=383a7fb0-62d9-4229-ac7a-3ae1d2b8b4f7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.885299166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ccaf153-11ff-479d-86c3-f7f59922b1ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.885368034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ccaf153-11ff-479d-86c3-f7f59922b1ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.885725887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42bd135ba85b52cdd4971e0fc9190c7e67663fc601aee0360d22d17b7398d278,PodSandboxId:14395ee050a1d84b4a41117c5bd98e5864a5f47f9066e8cada0ff560b991efcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722303553274656646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fpzxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada4e60d-74fd-4c35-82c4-6b0e65f39477,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb24e56bcad0c7dc5d4aab65448e9811653654ca9bae32272444f9fb88924696,PodSandboxId:1dc981b17390d0b8dc5d33d4c65635ced281b03bcaff60dad46e75f68775ec54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722303553247108878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c6a284a1-7348-40df-91dd-bfeed1870e24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a24b823b9bee28f01ab0997369139354a757799e912e95665788d59f2a642b8c,PodSandboxId:9d3088cc4ff8dc28152370a41b298df62a25a285c8c6a5f9f4eb94b25c045e23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722303553261384331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5t9cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 6e946df0-9911-43ed-85bd-ac0519460c54,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b55511692f765627a91917c7237a934b817527fb608c1cff889d6b9532cd32,PodSandboxId:c060ad259b7901f15ece358f738bb538d7b52a29a4cc9c47ba03d62ba3995533,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722303549408097572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 5ea1848cba1e91c7b32596ac52d7e3f5,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a99f848c6f7bfc7cf6b24a131249e261b5342cad9868b9bc787780c727c081b,PodSandboxId:2416d793e78d9d8a7afe086bdbbb3f9502a984470849a28bfb54bfd728697111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722303549411814777,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 889e7e701b04272806b1df07b23ce51b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d51785e68592680a5063fcaa5072f63606b8fa6d2aaec85f07e802420f27231b,PodSandboxId:1d9cfbc5e2f1c28c2e1fb3a07011b7dc5e8aa1c0dd0158308a80ae3333a44b1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722303549393817183,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 77cd91755b75dee1a8f9b3bef3a2d0b9,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8473a72d6145d3724543bf9dded2756c775ba18c8f06a104d2afcb166a87c28,PodSandboxId:ff6bf06378be9d53e0f409b0826df28910a0cdb200a6cd79ac49fffcd3db443d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722303544080556464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w4w9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63cd5128-15ae-
44a8-b642-5ee53650c4cf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3debc9d7a6be2d513295252e40191653b6ec17bd81f1fc5a318c1e5a05e7406b,PodSandboxId:1e8c4ada0838a008b98fd6d27267948c57f8cd55725ed19d6ce82e6873d31157,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:172230354307626197
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ae7c6bd7f9fd3eb3f32b7eef6cb383,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ab4ece67bdc282fdbe90309427d51c6fa265b22df44f091e4e13100eb77730,PodSandboxId:ff6bf06378be9d53e0f409b0826df28910a0cdb200a6cd79ac49fffcd3db443d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722303525814458332,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w4w9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63cd5128-15ae-44a8-b642-5ee53650c4cf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2502e1323115fb4630f99639626e453231ed0a2c6ae6e214cade910d00cf8978,PodSandboxId:14395ee050a1d84b4a41117c5bd98e5864a5f47f9066e8cada0ff560b991efcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722303525718194315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fpzxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada4e60d-74fd-4c35-82c4-6b0e65f39477,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b86621675f9ee3aed8f14a73503470a682bddfb4543eec2d991b45d4465639f0,PodSandboxId:1d9cfbc5e2f1c28c2e1fb3a07011b7dc5e8aa1c0dd01583
08a80ae3333a44b1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722303525150388106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77cd91755b75dee1a8f9b3bef3a2d0b9,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cbfc740975da61098e70415bad5df14b8278d8038e2d61518c4a717f8bd444b,PodSandboxId:c060ad259b7901f15ece358f738bb538d7b52a29a4cc9c47ba03d
62ba3995533,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722303525186443473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ea1848cba1e91c7b32596ac52d7e3f5,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9599ca3f6899b2f53666d95eefcd1c4ceaaee6f48806e84e7e8b4f16d0caa50c,PodSandboxId:1dc981b17390d0b8dc5d33d4c65635ce
d281b03bcaff60dad46e75f68775ec54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722303525030445833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a284a1-7348-40df-91dd-bfeed1870e24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e440928c918279576a970515e001133007787289109b50fc09d79101c059d3da,PodSandboxId:2416d793e78d9d8a7afe086bdbbb3f9502a984470849a
28bfb54bfd728697111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722303524951169470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 889e7e701b04272806b1df07b23ce51b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1188baed8c08e5d2cfdedff9ff069549910b7f9e59c1e9805a6f7441b7c1f760,PodSandboxId:9d3088cc4ff8dc28152370a41b298df62a25a285c8c6a5f9f4e
b94b25c045e23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722303524855371628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5t9cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e946df0-9911-43ed-85bd-ac0519460c54,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fb7d7b729a1c011fd93ad1a0e521dddcfe5a383422ad4bcddbd7ed8d7c8a5d,PodSandboxId:c3ef451e275ffc34271351aa8a83da91a3454a3079343c5d64c944bfd42202fb,Metadata:&ContainerM
etadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722303522304773402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ae7c6bd7f9fd3eb3f32b7eef6cb383,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ccaf153-11ff-479d-86c3-f7f59922b1ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.932089427Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f22e951c-305e-423a-84c0-c7787a1e2d65 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.932169232Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f22e951c-305e-423a-84c0-c7787a1e2d65 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.933723175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a15138e-2819-435f-b4de-162896876034 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.934070912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722303556934049519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a15138e-2819-435f-b4de-162896876034 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.934928136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f590404a-186e-41c6-9fd3-9ab54d622998 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.934983693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f590404a-186e-41c6-9fd3-9ab54d622998 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.935361597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42bd135ba85b52cdd4971e0fc9190c7e67663fc601aee0360d22d17b7398d278,PodSandboxId:14395ee050a1d84b4a41117c5bd98e5864a5f47f9066e8cada0ff560b991efcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722303553274656646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fpzxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada4e60d-74fd-4c35-82c4-6b0e65f39477,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb24e56bcad0c7dc5d4aab65448e9811653654ca9bae32272444f9fb88924696,PodSandboxId:1dc981b17390d0b8dc5d33d4c65635ced281b03bcaff60dad46e75f68775ec54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722303553247108878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c6a284a1-7348-40df-91dd-bfeed1870e24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a24b823b9bee28f01ab0997369139354a757799e912e95665788d59f2a642b8c,PodSandboxId:9d3088cc4ff8dc28152370a41b298df62a25a285c8c6a5f9f4eb94b25c045e23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722303553261384331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5t9cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 6e946df0-9911-43ed-85bd-ac0519460c54,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b55511692f765627a91917c7237a934b817527fb608c1cff889d6b9532cd32,PodSandboxId:c060ad259b7901f15ece358f738bb538d7b52a29a4cc9c47ba03d62ba3995533,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722303549408097572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 5ea1848cba1e91c7b32596ac52d7e3f5,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a99f848c6f7bfc7cf6b24a131249e261b5342cad9868b9bc787780c727c081b,PodSandboxId:2416d793e78d9d8a7afe086bdbbb3f9502a984470849a28bfb54bfd728697111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722303549411814777,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 889e7e701b04272806b1df07b23ce51b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d51785e68592680a5063fcaa5072f63606b8fa6d2aaec85f07e802420f27231b,PodSandboxId:1d9cfbc5e2f1c28c2e1fb3a07011b7dc5e8aa1c0dd0158308a80ae3333a44b1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722303549393817183,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 77cd91755b75dee1a8f9b3bef3a2d0b9,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8473a72d6145d3724543bf9dded2756c775ba18c8f06a104d2afcb166a87c28,PodSandboxId:ff6bf06378be9d53e0f409b0826df28910a0cdb200a6cd79ac49fffcd3db443d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722303544080556464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w4w9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63cd5128-15ae-
44a8-b642-5ee53650c4cf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3debc9d7a6be2d513295252e40191653b6ec17bd81f1fc5a318c1e5a05e7406b,PodSandboxId:1e8c4ada0838a008b98fd6d27267948c57f8cd55725ed19d6ce82e6873d31157,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:172230354307626197
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ae7c6bd7f9fd3eb3f32b7eef6cb383,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ab4ece67bdc282fdbe90309427d51c6fa265b22df44f091e4e13100eb77730,PodSandboxId:ff6bf06378be9d53e0f409b0826df28910a0cdb200a6cd79ac49fffcd3db443d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722303525814458332,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w4w9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63cd5128-15ae-44a8-b642-5ee53650c4cf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2502e1323115fb4630f99639626e453231ed0a2c6ae6e214cade910d00cf8978,PodSandboxId:14395ee050a1d84b4a41117c5bd98e5864a5f47f9066e8cada0ff560b991efcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722303525718194315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fpzxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada4e60d-74fd-4c35-82c4-6b0e65f39477,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b86621675f9ee3aed8f14a73503470a682bddfb4543eec2d991b45d4465639f0,PodSandboxId:1d9cfbc5e2f1c28c2e1fb3a07011b7dc5e8aa1c0dd01583
08a80ae3333a44b1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722303525150388106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77cd91755b75dee1a8f9b3bef3a2d0b9,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cbfc740975da61098e70415bad5df14b8278d8038e2d61518c4a717f8bd444b,PodSandboxId:c060ad259b7901f15ece358f738bb538d7b52a29a4cc9c47ba03d
62ba3995533,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722303525186443473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ea1848cba1e91c7b32596ac52d7e3f5,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9599ca3f6899b2f53666d95eefcd1c4ceaaee6f48806e84e7e8b4f16d0caa50c,PodSandboxId:1dc981b17390d0b8dc5d33d4c65635ce
d281b03bcaff60dad46e75f68775ec54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722303525030445833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a284a1-7348-40df-91dd-bfeed1870e24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e440928c918279576a970515e001133007787289109b50fc09d79101c059d3da,PodSandboxId:2416d793e78d9d8a7afe086bdbbb3f9502a984470849a
28bfb54bfd728697111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722303524951169470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 889e7e701b04272806b1df07b23ce51b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1188baed8c08e5d2cfdedff9ff069549910b7f9e59c1e9805a6f7441b7c1f760,PodSandboxId:9d3088cc4ff8dc28152370a41b298df62a25a285c8c6a5f9f4e
b94b25c045e23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722303524855371628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5t9cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e946df0-9911-43ed-85bd-ac0519460c54,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fb7d7b729a1c011fd93ad1a0e521dddcfe5a383422ad4bcddbd7ed8d7c8a5d,PodSandboxId:c3ef451e275ffc34271351aa8a83da91a3454a3079343c5d64c944bfd42202fb,Metadata:&ContainerM
etadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722303522304773402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ae7c6bd7f9fd3eb3f32b7eef6cb383,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f590404a-186e-41c6-9fd3-9ab54d622998 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.965032684Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37b1089d-7c1f-48c0-9c1f-5284eaa6d42f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.965256429Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1d9cfbc5e2f1c28c2e1fb3a07011b7dc5e8aa1c0dd0158308a80ae3333a44b1f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-599146,Uid:77cd91755b75dee1a8f9b3bef3a2d0b9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722303524713610494,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77cd91755b75dee1a8f9b3bef3a2d0b9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.97:8443,kubernetes.io/config.hash: 77cd91755b75dee1a8f9b3bef3a2d0b9,kubernetes.io/config.seen: 2024-07-30T01:37:48.999667116Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1dc981
b17390d0b8dc5d33d4c65635ced281b03bcaff60dad46e75f68775ec54,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c6a284a1-7348-40df-91dd-bfeed1870e24,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722303524708341720,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a284a1-7348-40df-91dd-bfeed1870e24,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"sto
rage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-30T01:38:01.494739639Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c060ad259b7901f15ece358f738bb538d7b52a29a4cc9c47ba03d62ba3995533,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-599146,Uid:5ea1848cba1e91c7b32596ac52d7e3f5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722303524707501443,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ea1848cba1e91c7b32596ac52d7e3f5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ea1848cba1e91c7b32596ac52d7e3f5,kubernet
es.io/config.seen: 2024-07-30T01:37:48.999671502Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ff6bf06378be9d53e0f409b0826df28910a0cdb200a6cd79ac49fffcd3db443d,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-w4w9w,Uid:63cd5128-15ae-44a8-b642-5ee53650c4cf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722303524639627709,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-w4w9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63cd5128-15ae-44a8-b642-5ee53650c4cf,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:38:01.553948517Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2416d793e78d9d8a7afe086bdbbb3f9502a984470849a28bfb54bfd728697111,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-599146,Uid:889e7e701b04272806b1df07b23ce51b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:
1722303524615491572,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 889e7e701b04272806b1df07b23ce51b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 889e7e701b04272806b1df07b23ce51b,kubernetes.io/config.seen: 2024-07-30T01:37:48.999672588Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d3088cc4ff8dc28152370a41b298df62a25a285c8c6a5f9f4eb94b25c045e23,Metadata:&PodSandboxMetadata{Name:kube-proxy-5t9cf,Uid:6e946df0-9911-43ed-85bd-ac0519460c54,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722303524591209277,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5t9cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e946df0-9911-43ed-85bd-ac0519460c54,k8s-app: kube-proxy,pod-template-genera
tion: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:38:01.564382657Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14395ee050a1d84b4a41117c5bd98e5864a5f47f9066e8cada0ff560b991efcf,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-fpzxs,Uid:ada4e60d-74fd-4c35-82c4-6b0e65f39477,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722303524587394683,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-fpzxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada4e60d-74fd-4c35-82c4-6b0e65f39477,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-30T01:38:01.605579855Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e8c4ada0838a008b98fd6d27267948c57f8cd55725ed19d6ce82e6873d31157,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-599146,Uid:c7ae7c6bd7f9fd3eb3f32b7eef6cb383,Namespace:kube-system,Attem
pt:2,},State:SANDBOX_READY,CreatedAt:1722303524559836357,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ae7c6bd7f9fd3eb3f32b7eef6cb383,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.97:2379,kubernetes.io/config.hash: c7ae7c6bd7f9fd3eb3f32b7eef6cb383,kubernetes.io/config.seen: 2024-07-30T01:37:49.048615937Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=37b1089d-7c1f-48c0-9c1f-5284eaa6d42f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.966011204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86be078d-eed8-478e-844b-990a5c0160f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.966081997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86be078d-eed8-478e-844b-990a5c0160f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.966288212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42bd135ba85b52cdd4971e0fc9190c7e67663fc601aee0360d22d17b7398d278,PodSandboxId:14395ee050a1d84b4a41117c5bd98e5864a5f47f9066e8cada0ff560b991efcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722303553274656646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fpzxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada4e60d-74fd-4c35-82c4-6b0e65f39477,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb24e56bcad0c7dc5d4aab65448e9811653654ca9bae32272444f9fb88924696,PodSandboxId:1dc981b17390d0b8dc5d33d4c65635ced281b03bcaff60dad46e75f68775ec54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722303553247108878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c6a284a1-7348-40df-91dd-bfeed1870e24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a24b823b9bee28f01ab0997369139354a757799e912e95665788d59f2a642b8c,PodSandboxId:9d3088cc4ff8dc28152370a41b298df62a25a285c8c6a5f9f4eb94b25c045e23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722303553261384331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5t9cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 6e946df0-9911-43ed-85bd-ac0519460c54,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b55511692f765627a91917c7237a934b817527fb608c1cff889d6b9532cd32,PodSandboxId:c060ad259b7901f15ece358f738bb538d7b52a29a4cc9c47ba03d62ba3995533,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722303549408097572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 5ea1848cba1e91c7b32596ac52d7e3f5,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a99f848c6f7bfc7cf6b24a131249e261b5342cad9868b9bc787780c727c081b,PodSandboxId:2416d793e78d9d8a7afe086bdbbb3f9502a984470849a28bfb54bfd728697111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722303549411814777,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 889e7e701b04272806b1df07b23ce51b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d51785e68592680a5063fcaa5072f63606b8fa6d2aaec85f07e802420f27231b,PodSandboxId:1d9cfbc5e2f1c28c2e1fb3a07011b7dc5e8aa1c0dd0158308a80ae3333a44b1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722303549393817183,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 77cd91755b75dee1a8f9b3bef3a2d0b9,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8473a72d6145d3724543bf9dded2756c775ba18c8f06a104d2afcb166a87c28,PodSandboxId:ff6bf06378be9d53e0f409b0826df28910a0cdb200a6cd79ac49fffcd3db443d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722303544080556464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w4w9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63cd5128-15ae-
44a8-b642-5ee53650c4cf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3debc9d7a6be2d513295252e40191653b6ec17bd81f1fc5a318c1e5a05e7406b,PodSandboxId:1e8c4ada0838a008b98fd6d27267948c57f8cd55725ed19d6ce82e6873d31157,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:172230354307626197
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ae7c6bd7f9fd3eb3f32b7eef6cb383,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86be078d-eed8-478e-844b-990a5c0160f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.978549561Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ab2c3a4-6682-46e6-bb3f-679b0161b2f0 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.978618462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ab2c3a4-6682-46e6-bb3f-679b0161b2f0 name=/runtime.v1.RuntimeService/Version
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.982617798Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ab9ff9c-fd99-4fbf-bbac-e491b2406ff8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.983053348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722303556983030286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ab9ff9c-fd99-4fbf-bbac-e491b2406ff8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.983975025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91983bcc-f393-492c-be74-8194033c5924 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.984082843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91983bcc-f393-492c-be74-8194033c5924 name=/runtime.v1.RuntimeService/ListContainers
	Jul 30 01:39:16 kubernetes-upgrade-599146 crio[2664]: time="2024-07-30 01:39:16.984576858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42bd135ba85b52cdd4971e0fc9190c7e67663fc601aee0360d22d17b7398d278,PodSandboxId:14395ee050a1d84b4a41117c5bd98e5864a5f47f9066e8cada0ff560b991efcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722303553274656646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fpzxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada4e60d-74fd-4c35-82c4-6b0e65f39477,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb24e56bcad0c7dc5d4aab65448e9811653654ca9bae32272444f9fb88924696,PodSandboxId:1dc981b17390d0b8dc5d33d4c65635ced281b03bcaff60dad46e75f68775ec54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722303553247108878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: c6a284a1-7348-40df-91dd-bfeed1870e24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a24b823b9bee28f01ab0997369139354a757799e912e95665788d59f2a642b8c,PodSandboxId:9d3088cc4ff8dc28152370a41b298df62a25a285c8c6a5f9f4eb94b25c045e23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722303553261384331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5t9cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 6e946df0-9911-43ed-85bd-ac0519460c54,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b55511692f765627a91917c7237a934b817527fb608c1cff889d6b9532cd32,PodSandboxId:c060ad259b7901f15ece358f738bb538d7b52a29a4cc9c47ba03d62ba3995533,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722303549408097572,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 5ea1848cba1e91c7b32596ac52d7e3f5,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a99f848c6f7bfc7cf6b24a131249e261b5342cad9868b9bc787780c727c081b,PodSandboxId:2416d793e78d9d8a7afe086bdbbb3f9502a984470849a28bfb54bfd728697111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722303549411814777,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 889e7e701b04272806b1df07b23ce51b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d51785e68592680a5063fcaa5072f63606b8fa6d2aaec85f07e802420f27231b,PodSandboxId:1d9cfbc5e2f1c28c2e1fb3a07011b7dc5e8aa1c0dd0158308a80ae3333a44b1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722303549393817183,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 77cd91755b75dee1a8f9b3bef3a2d0b9,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8473a72d6145d3724543bf9dded2756c775ba18c8f06a104d2afcb166a87c28,PodSandboxId:ff6bf06378be9d53e0f409b0826df28910a0cdb200a6cd79ac49fffcd3db443d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722303544080556464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w4w9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63cd5128-15ae-
44a8-b642-5ee53650c4cf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3debc9d7a6be2d513295252e40191653b6ec17bd81f1fc5a318c1e5a05e7406b,PodSandboxId:1e8c4ada0838a008b98fd6d27267948c57f8cd55725ed19d6ce82e6873d31157,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:172230354307626197
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ae7c6bd7f9fd3eb3f32b7eef6cb383,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ab4ece67bdc282fdbe90309427d51c6fa265b22df44f091e4e13100eb77730,PodSandboxId:ff6bf06378be9d53e0f409b0826df28910a0cdb200a6cd79ac49fffcd3db443d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722303525814458332,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w4w9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63cd5128-15ae-44a8-b642-5ee53650c4cf,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2502e1323115fb4630f99639626e453231ed0a2c6ae6e214cade910d00cf8978,PodSandboxId:14395ee050a1d84b4a41117c5bd98e5864a5f47f9066e8cada0ff560b991efcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722303525718194315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fpzxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada4e60d-74fd-4c35-82c4-6b0e65f39477,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b86621675f9ee3aed8f14a73503470a682bddfb4543eec2d991b45d4465639f0,PodSandboxId:1d9cfbc5e2f1c28c2e1fb3a07011b7dc5e8aa1c0dd01583
08a80ae3333a44b1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722303525150388106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77cd91755b75dee1a8f9b3bef3a2d0b9,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cbfc740975da61098e70415bad5df14b8278d8038e2d61518c4a717f8bd444b,PodSandboxId:c060ad259b7901f15ece358f738bb538d7b52a29a4cc9c47ba03d
62ba3995533,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722303525186443473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ea1848cba1e91c7b32596ac52d7e3f5,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9599ca3f6899b2f53666d95eefcd1c4ceaaee6f48806e84e7e8b4f16d0caa50c,PodSandboxId:1dc981b17390d0b8dc5d33d4c65635ce
d281b03bcaff60dad46e75f68775ec54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722303525030445833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a284a1-7348-40df-91dd-bfeed1870e24,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e440928c918279576a970515e001133007787289109b50fc09d79101c059d3da,PodSandboxId:2416d793e78d9d8a7afe086bdbbb3f9502a984470849a
28bfb54bfd728697111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722303524951169470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 889e7e701b04272806b1df07b23ce51b,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1188baed8c08e5d2cfdedff9ff069549910b7f9e59c1e9805a6f7441b7c1f760,PodSandboxId:9d3088cc4ff8dc28152370a41b298df62a25a285c8c6a5f9f4e
b94b25c045e23,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722303524855371628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5t9cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e946df0-9911-43ed-85bd-ac0519460c54,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48fb7d7b729a1c011fd93ad1a0e521dddcfe5a383422ad4bcddbd7ed8d7c8a5d,PodSandboxId:c3ef451e275ffc34271351aa8a83da91a3454a3079343c5d64c944bfd42202fb,Metadata:&ContainerM
etadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722303522304773402,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ae7c6bd7f9fd3eb3f32b7eef6cb383,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91983bcc-f393-492c-be74-8194033c5924 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	42bd135ba85b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   14395ee050a1d       coredns-5cfdc65f69-fpzxs
	a24b823b9bee2       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   3 seconds ago       Running             kube-proxy                2                   9d3088cc4ff8d       kube-proxy-5t9cf
	cb24e56bcad0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   1dc981b17390d       storage-provisioner
	9a99f848c6f7b       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            2                   2416d793e78d9       kube-scheduler-kubernetes-upgrade-599146
	93b55511692f7       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   c060ad259b790       kube-controller-manager-kubernetes-upgrade-599146
	d51785e685926       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            2                   1d9cfbc5e2f1c       kube-apiserver-kubernetes-upgrade-599146
	c8473a72d6145       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 seconds ago      Running             coredns                   2                   ff6bf06378be9       coredns-5cfdc65f69-w4w9w
	3debc9d7a6be2       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   13 seconds ago      Running             etcd                      2                   1e8c4ada0838a       etcd-kubernetes-upgrade-599146
	17ab4ece67bdc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   31 seconds ago      Exited              coredns                   1                   ff6bf06378be9       coredns-5cfdc65f69-w4w9w
	2502e1323115f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   31 seconds ago      Exited              coredns                   1                   14395ee050a1d       coredns-5cfdc65f69-fpzxs
	6cbfc740975da       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   31 seconds ago      Exited              kube-controller-manager   1                   c060ad259b790       kube-controller-manager-kubernetes-upgrade-599146
	b86621675f9ee       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   31 seconds ago      Exited              kube-apiserver            1                   1d9cfbc5e2f1c       kube-apiserver-kubernetes-upgrade-599146
	9599ca3f6899b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   32 seconds ago      Exited              storage-provisioner       2                   1dc981b17390d       storage-provisioner
	e440928c91827       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   32 seconds ago      Exited              kube-scheduler            1                   2416d793e78d9       kube-scheduler-kubernetes-upgrade-599146
	1188baed8c08e       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   32 seconds ago      Exited              kube-proxy                1                   9d3088cc4ff8d       kube-proxy-5t9cf
	48fb7d7b729a1       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   34 seconds ago      Exited              etcd                      1                   c3ef451e275ff       etcd-kubernetes-upgrade-599146
	
	
	==> coredns [17ab4ece67bdc282fdbe90309427d51c6fa265b22df44f091e4e13100eb77730] <==
	
	
	==> coredns [2502e1323115fb4630f99639626e453231ed0a2c6ae6e214cade910d00cf8978] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [42bd135ba85b52cdd4971e0fc9190c7e67663fc601aee0360d22d17b7398d278] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c8473a72d6145d3724543bf9dded2756c775ba18c8f06a104d2afcb166a87c28] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51934->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51934->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51914->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51914->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51928->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:51928->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-599146
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-599146
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 01:37:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-599146
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 01:39:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 01:39:12 +0000   Tue, 30 Jul 2024 01:37:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 01:39:12 +0000   Tue, 30 Jul 2024 01:37:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 01:39:12 +0000   Tue, 30 Jul 2024 01:37:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 01:39:12 +0000   Tue, 30 Jul 2024 01:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.97
	  Hostname:    kubernetes-upgrade-599146
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 497bf3aae4da440ba505416821b153aa
	  System UUID:                497bf3aa-e4da-440b-a505-416821b153aa
	  Boot ID:                    c824bd82-5a4c-4d47-8f96-284d477280f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-fpzxs                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 coredns-5cfdc65f69-w4w9w                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 etcd-kubernetes-upgrade-599146                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         79s
	  kube-system                 kube-apiserver-kubernetes-upgrade-599146             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-599146    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-5t9cf                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-kubernetes-upgrade-599146             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  87s (x8 over 88s)  kubelet          Node kubernetes-upgrade-599146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 88s)  kubelet          Node kubernetes-upgrade-599146 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 88s)  kubelet          Node kubernetes-upgrade-599146 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     76s                cidrAllocator    Node kubernetes-upgrade-599146 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           76s                node-controller  Node kubernetes-upgrade-599146 event: Registered Node kubernetes-upgrade-599146 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-599146 event: Registered Node kubernetes-upgrade-599146 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.346042] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.063949] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066069] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.177974] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.139281] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.274513] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +3.933905] systemd-fstab-generator[729]: Ignoring "noauto" option for root device
	[  +1.976857] systemd-fstab-generator[847]: Ignoring "noauto" option for root device
	[  +0.061060] kauditd_printk_skb: 158 callbacks suppressed
	[  +9.854673] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.085425] kauditd_printk_skb: 69 callbacks suppressed
	[Jul30 01:38] kauditd_printk_skb: 110 callbacks suppressed
	[ +36.911847] systemd-fstab-generator[2249]: Ignoring "noauto" option for root device
	[  +0.150272] systemd-fstab-generator[2261]: Ignoring "noauto" option for root device
	[  +0.242021] systemd-fstab-generator[2321]: Ignoring "noauto" option for root device
	[  +0.199609] systemd-fstab-generator[2385]: Ignoring "noauto" option for root device
	[  +0.528972] systemd-fstab-generator[2512]: Ignoring "noauto" option for root device
	[  +1.849756] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +2.496137] kauditd_printk_skb: 267 callbacks suppressed
	[Jul30 01:39] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[  +0.085985] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.063375] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.827844] systemd-fstab-generator[4287]: Ignoring "noauto" option for root device
	
	
	==> etcd [3debc9d7a6be2d513295252e40191653b6ec17bd81f1fc5a318c1e5a05e7406b] <==
	{"level":"info","ts":"2024-07-30T01:39:03.207192Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-30T01:39:03.207467Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"1f2cc3497df204b1","initial-advertise-peer-urls":["https://192.168.50.97:2380"],"listen-peer-urls":["https://192.168.50.97:2380"],"advertise-client-urls":["https://192.168.50.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-30T01:39:03.207501Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-30T01:39:03.207655Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.97:2380"}
	{"level":"info","ts":"2024-07-30T01:39:03.207677Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.97:2380"}
	{"level":"info","ts":"2024-07-30T01:39:04.693453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-30T01:39:04.69356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-30T01:39:04.693597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 received MsgPreVoteResp from 1f2cc3497df204b1 at term 2"}
	{"level":"info","ts":"2024-07-30T01:39:04.693628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 became candidate at term 3"}
	{"level":"info","ts":"2024-07-30T01:39:04.693652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 received MsgVoteResp from 1f2cc3497df204b1 at term 3"}
	{"level":"info","ts":"2024-07-30T01:39:04.693679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 became leader at term 3"}
	{"level":"info","ts":"2024-07-30T01:39:04.693705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1f2cc3497df204b1 elected leader 1f2cc3497df204b1 at term 3"}
	{"level":"info","ts":"2024-07-30T01:39:04.695622Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T01:39:04.695593Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1f2cc3497df204b1","local-member-attributes":"{Name:kubernetes-upgrade-599146 ClientURLs:[https://192.168.50.97:2379]}","request-path":"/0/members/1f2cc3497df204b1/attributes","cluster-id":"a36d2e63d2f8b676","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T01:39:04.696737Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-30T01:39:04.696775Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T01:39:04.696801Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T01:39:04.696743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T01:39:04.697577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.97:2379"}
	{"level":"info","ts":"2024-07-30T01:39:04.698195Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-30T01:39:04.698917Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-30T01:39:15.656024Z","caller":"traceutil/trace.go:171","msg":"trace[1382461812] linearizableReadLoop","detail":"{readStateIndex:471; appliedIndex:470; }","duration":"118.248628ms","start":"2024-07-30T01:39:15.53776Z","end":"2024-07-30T01:39:15.656009Z","steps":["trace[1382461812] 'read index received'  (duration: 118.051636ms)","trace[1382461812] 'applied index is now lower than readState.Index'  (duration: 196.192µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T01:39:15.656137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.357968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2024-07-30T01:39:15.656184Z","caller":"traceutil/trace.go:171","msg":"trace[1434230089] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:440; }","duration":"118.419488ms","start":"2024-07-30T01:39:15.537757Z","end":"2024-07-30T01:39:15.656176Z","steps":["trace[1434230089] 'agreement among raft nodes before linearized reading'  (duration: 118.337441ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T01:39:15.656369Z","caller":"traceutil/trace.go:171","msg":"trace[625062377] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"131.404772ms","start":"2024-07-30T01:39:15.524954Z","end":"2024-07-30T01:39:15.656359Z","steps":["trace[625062377] 'process raft request'  (duration: 130.949227ms)"],"step_count":1}
	
	
	==> etcd [48fb7d7b729a1c011fd93ad1a0e521dddcfe5a383422ad4bcddbd7ed8d7c8a5d] <==
	
	
	==> kernel <==
	 01:39:17 up 1 min,  0 users,  load average: 1.65, 0.58, 0.21
	Linux kubernetes-upgrade-599146 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b86621675f9ee3aed8f14a73503470a682bddfb4543eec2d991b45d4465639f0] <==
	I0730 01:38:46.161773       1 server.go:142] Version: v1.31.0-beta.0
	I0730 01:38:46.161923       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0730 01:38:46.573506       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:46.576378       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0730 01:38:46.578578       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0730 01:38:46.586172       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 01:38:46.594800       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0730 01:38:46.594901       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0730 01:38:46.595193       1 instance.go:231] Using reconciler: lease
	W0730 01:38:46.596196       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:47.576133       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:47.577604       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:47.596771       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:49.156982       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:49.296513       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:49.324110       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:51.671542       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:51.919976       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:52.272861       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:55.386657       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:55.641149       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:38:56.778342       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:39:02.050524       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:39:02.585980       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0730 01:39:02.707553       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d51785e68592680a5063fcaa5072f63606b8fa6d2aaec85f07e802420f27231b] <==
	I0730 01:39:12.474710       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 01:39:12.474764       1 policy_source.go:224] refreshing policies
	I0730 01:39:12.481509       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0730 01:39:12.481979       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 01:39:12.492539       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0730 01:39:12.492697       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0730 01:39:12.492723       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0730 01:39:12.495775       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 01:39:12.496214       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0730 01:39:12.496257       1 shared_informer.go:320] Caches are synced for configmaps
	I0730 01:39:12.496317       1 aggregator.go:171] initial CRD sync complete...
	I0730 01:39:12.496337       1 autoregister_controller.go:144] Starting autoregister controller
	I0730 01:39:12.496342       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0730 01:39:12.496346       1 cache.go:39] Caches are synced for autoregister controller
	I0730 01:39:12.509029       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0730 01:39:12.533389       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0730 01:39:12.569112       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0730 01:39:13.400094       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0730 01:39:14.225481       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0730 01:39:14.240056       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0730 01:39:14.303232       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0730 01:39:14.352955       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0730 01:39:14.362211       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0730 01:39:15.524362       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 01:39:17.107209       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6cbfc740975da61098e70415bad5df14b8278d8038e2d61518c4a717f8bd444b] <==
	I0730 01:38:46.300040       1 serving.go:386] Generated self-signed cert in-memory
	I0730 01:38:47.185848       1 controllermanager.go:188] "Starting" version="v1.31.0-beta.0"
	I0730 01:38:47.185964       1 controllermanager.go:190] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 01:38:47.187357       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0730 01:38:47.187554       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0730 01:38:47.187672       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0730 01:38:47.187798       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [93b55511692f765627a91917c7237a934b817527fb608c1cff889d6b9532cd32] <==
	I0730 01:39:16.764637       1 shared_informer.go:320] Caches are synced for taint
	I0730 01:39:16.764796       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0730 01:39:16.764871       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-599146"
	I0730 01:39:16.764914       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0730 01:39:16.775152       1 shared_informer.go:320] Caches are synced for GC
	I0730 01:39:16.797653       1 shared_informer.go:320] Caches are synced for node
	I0730 01:39:16.797783       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0730 01:39:16.797804       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0730 01:39:16.797809       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0730 01:39:16.797815       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0730 01:39:16.797898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-599146"
	I0730 01:39:16.802500       1 shared_informer.go:320] Caches are synced for daemon sets
	I0730 01:39:16.835969       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0730 01:39:16.847125       1 shared_informer.go:320] Caches are synced for persistent volume
	I0730 01:39:16.865800       1 shared_informer.go:320] Caches are synced for PV protection
	I0730 01:39:16.892316       1 shared_informer.go:320] Caches are synced for attach detach
	I0730 01:39:17.015955       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 01:39:17.016021       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0730 01:39:17.061621       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 01:39:17.090089       1 shared_informer.go:320] Caches are synced for endpoint
	I0730 01:39:17.090124       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0730 01:39:17.090186       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-599146"
	I0730 01:39:17.099734       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 01:39:17.100035       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 01:39:17.100047       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1188baed8c08e5d2cfdedff9ff069549910b7f9e59c1e9805a6f7441b7c1f760] <==
	I0730 01:38:45.925030       1 server_linux.go:67] "Using iptables proxy"
	E0730 01:38:46.424459       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0730 01:38:46.608002       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0730 01:38:56.610313       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-599146\": net/http: TLS handshake timeout"
	E0730 01:39:07.299912       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-599146\": dial tcp 192.168.50.97:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.97:54142->192.168.50.97:8443: read: connection reset by peer"
	
	
	==> kube-proxy [a24b823b9bee28f01ab0997369139354a757799e912e95665788d59f2a642b8c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0730 01:39:13.596733       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0730 01:39:13.607805       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.97"]
	E0730 01:39:13.607933       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0730 01:39:13.659173       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0730 01:39:13.659215       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0730 01:39:13.659251       1 server_linux.go:170] "Using iptables Proxier"
	I0730 01:39:13.661630       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0730 01:39:13.661969       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0730 01:39:13.662031       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 01:39:13.665228       1 config.go:197] "Starting service config controller"
	I0730 01:39:13.665442       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 01:39:13.665569       1 config.go:104] "Starting endpoint slice config controller"
	I0730 01:39:13.665593       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 01:39:13.665617       1 config.go:326] "Starting node config controller"
	I0730 01:39:13.668507       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 01:39:13.765984       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 01:39:13.766037       1 shared_informer.go:320] Caches are synced for service config
	I0730 01:39:13.769560       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9a99f848c6f7bfc7cf6b24a131249e261b5342cad9868b9bc787780c727c081b] <==
	I0730 01:39:10.945234       1 serving.go:386] Generated self-signed cert in-memory
	W0730 01:39:12.474499       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0730 01:39:12.474573       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0730 01:39:12.474584       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0730 01:39:12.474590       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0730 01:39:12.520277       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0730 01:39:12.520822       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 01:39:12.525819       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0730 01:39:12.526624       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0730 01:39:12.527940       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0730 01:39:12.526666       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0730 01:39:12.628713       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e440928c918279576a970515e001133007787289109b50fc09d79101c059d3da] <==
	I0730 01:38:46.778654       1 serving.go:386] Generated self-signed cert in-memory
	W0730 01:38:57.191578       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.50.97:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0730 01:38:57.191609       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0730 01:38:57.191616       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0730 01:39:07.292022       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0730 01:39:07.292061       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0730 01:39:07.292078       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0730 01:39:07.295026       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0730 01:39:07.295113       1 server.go:237] "waiting for handlers to sync" err="context canceled"
	E0730 01:39:07.295159       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:09.144400    3846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ea1848cba1e91c7b32596ac52d7e3f5-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-599146\" (UID: \"5ea1848cba1e91c7b32596ac52d7e3f5\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-599146"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:09.144482    3846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ea1848cba1e91c7b32596ac52d7e3f5-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-599146\" (UID: \"5ea1848cba1e91c7b32596ac52d7e3f5\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-599146"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:09.144514    3846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ea1848cba1e91c7b32596ac52d7e3f5-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-599146\" (UID: \"5ea1848cba1e91c7b32596ac52d7e3f5\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-599146"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:09.144543    3846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ea1848cba1e91c7b32596ac52d7e3f5-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-599146\" (UID: \"5ea1848cba1e91c7b32596ac52d7e3f5\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-599146"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:09.250569    3846 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-599146"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: E0730 01:39:09.251483    3846 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.97:8443: connect: connection refused" node="kubernetes-upgrade-599146"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:09.370273    3846 scope.go:117] "RemoveContainer" containerID="b86621675f9ee3aed8f14a73503470a682bddfb4543eec2d991b45d4465639f0"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:09.372227    3846 scope.go:117] "RemoveContainer" containerID="6cbfc740975da61098e70415bad5df14b8278d8038e2d61518c4a717f8bd444b"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:09.373629    3846 scope.go:117] "RemoveContainer" containerID="e440928c918279576a970515e001133007787289109b50fc09d79101c059d3da"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: E0730 01:39:09.544782    3846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-599146?timeout=10s\": dial tcp 192.168.50.97:8443: connect: connection refused" interval="800ms"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:09.653867    3846 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-599146"
	Jul 30 01:39:09 kubernetes-upgrade-599146 kubelet[3846]: E0730 01:39:09.654741    3846 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.97:8443: connect: connection refused" node="kubernetes-upgrade-599146"
	Jul 30 01:39:10 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:10.457091    3846 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-599146"
	Jul 30 01:39:12 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:12.541391    3846 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-599146"
	Jul 30 01:39:12 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:12.541904    3846 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-599146"
	Jul 30 01:39:12 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:12.542061    3846 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 30 01:39:12 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:12.543751    3846 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 30 01:39:12 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:12.915306    3846 apiserver.go:52] "Watching apiserver"
	Jul 30 01:39:12 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:12.941006    3846 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 30 01:39:12 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:12.977607    3846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c6a284a1-7348-40df-91dd-bfeed1870e24-tmp\") pod \"storage-provisioner\" (UID: \"c6a284a1-7348-40df-91dd-bfeed1870e24\") " pod="kube-system/storage-provisioner"
	Jul 30 01:39:12 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:12.977640    3846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e946df0-9911-43ed-85bd-ac0519460c54-xtables-lock\") pod \"kube-proxy-5t9cf\" (UID: \"6e946df0-9911-43ed-85bd-ac0519460c54\") " pod="kube-system/kube-proxy-5t9cf"
	Jul 30 01:39:12 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:12.977789    3846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e946df0-9911-43ed-85bd-ac0519460c54-lib-modules\") pod \"kube-proxy-5t9cf\" (UID: \"6e946df0-9911-43ed-85bd-ac0519460c54\") " pod="kube-system/kube-proxy-5t9cf"
	Jul 30 01:39:13 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:13.221680    3846 scope.go:117] "RemoveContainer" containerID="1188baed8c08e5d2cfdedff9ff069549910b7f9e59c1e9805a6f7441b7c1f760"
	Jul 30 01:39:13 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:13.222605    3846 scope.go:117] "RemoveContainer" containerID="9599ca3f6899b2f53666d95eefcd1c4ceaaee6f48806e84e7e8b4f16d0caa50c"
	Jul 30 01:39:13 kubernetes-upgrade-599146 kubelet[3846]: I0730 01:39:13.224367    3846 scope.go:117] "RemoveContainer" containerID="2502e1323115fb4630f99639626e453231ed0a2c6ae6e214cade910d00cf8978"
	
	
	==> storage-provisioner [9599ca3f6899b2f53666d95eefcd1c4ceaaee6f48806e84e7e8b4f16d0caa50c] <==
	I0730 01:38:45.734078       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0730 01:38:45.741578       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [cb24e56bcad0c7dc5d4aab65448e9811653654ca9bae32272444f9fb88924696] <==
	I0730 01:39:13.467755       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0730 01:39:13.488494       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0730 01:39:13.488558       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0730 01:39:16.452845  550623 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19346-495103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-599146 -n kubernetes-upgrade-599146
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-599146 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-599146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-599146
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-599146: (1.332476812s)
--- FAIL: TestKubernetesUpgrade (449.71s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
E0730 01:56:10.080901  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
E0730 01:58:42.934164  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
E0730 02:01:10.081764  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
E0730 02:03:25.985023  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
E0730 02:03:42.934561  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.3:8443: connect: connection refused
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (28m38s)
	TestStartStop (31m11s)
	TestStartStop/group/default-k8s-diff-port (22m20s)
	TestStartStop/group/default-k8s-diff-port/serial (22m20s)
	TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (1m24s)
	TestStartStop/group/embed-certs (24m45s)
	TestStartStop/group/embed-certs/serial (24m45s)
	TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (2m45s)
	TestStartStop/group/no-preload (25m37s)
	TestStartStop/group/no-preload/serial (25m37s)
	TestStartStop/group/no-preload/serial/AddonExistsAfterStop (16s)
	TestStartStop/group/old-k8s-version (26m23s)
	TestStartStop/group/old-k8s-version/serial (26m23s)
	TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8m1s)

                                                
                                                
goroutine 6628 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006c8ea0, 0xc000653bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000610288, {0x49d1100, 0x2b, 0x2b}, {0x26b6029?, 0xc00097fb00?, 0x4a8da40?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00013ab40)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00013ab40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 6 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0007c3480)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2847 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006c8d00, 0x313a3e0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2866
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1182 [select, 97 minutes]:
net/http.(*persistConn).readLoop(0xc0006d7680)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1180
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 183 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000bfb620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2977 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000761720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c6f040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c6f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c6f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c6f040, 0xc0006a2500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2976
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 53 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 52
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 2788 [chan receive, 28 minutes]:
testing.(*T).Run(0xc0019fe000, {0x265b689?, 0x55127c?}, 0xc00148c1b0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0019fe000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0019fe000, 0x313a1c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 184 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000075b40, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 4190 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4189
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 127 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 126
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 126 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9b60, 0xc000060060}, 0xc0000abf50, 0xc0000abf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9b60, 0xc000060060}, 0x60?, 0xc0000abf50, 0xc0000abf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9b60?, 0xc000060060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc00029b500?, 0xc000613560?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 184
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3077 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000761720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c6f520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c6f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c6f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c6f520, 0xc0006a3380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2976
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3563 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc001910a90, 0x4)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001465500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001910ac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001599420, {0x3695ae0, 0xc000880ed0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001599420, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3619
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3434 [chan receive]:
testing.(*T).Run(0xc000c6e680, {0x268172a?, 0x60400000004?}, 0xc0004a4080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000c6e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000c6e680, 0xc000644f00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2931
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2930 [chan receive, 22 minutes]:
testing.(*T).Run(0xc0006c9a00, {0x265cc34?, 0x0?}, 0xc00169c280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006c9a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0006c9a00, 0xc000075d80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2847
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 227 [select, 112 minutes]:
net/http.(*persistConn).readLoop(0xc0015a0120)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 261
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 4189 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9b60, 0xc000060060}, 0xc00145c750, 0xc00145c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9b60, 0xc000060060}, 0xe0?, 0xc00145c750, 0xc00145c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9b60?, 0xc000060060?}, 0x99b656?, 0xc000229380?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00145c7d0?, 0x592e44?, 0xc001d041e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4212
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 4212 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0016a6740, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4210
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3618 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001465620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3606
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3575 [chan receive, 1 minutes]:
testing.(*T).Run(0xc00138e680, {0x268172a?, 0x60400000004?}, 0xc001fa2080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00138e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00138e680, 0xc00169c280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2930
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 193 [select, 112 minutes]:
net/http.(*persistConn).readLoop(0xc00141c480)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 205
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 228 [select, 112 minutes]:
net/http.(*persistConn).writeLoop(0xc0015a0120)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 261
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 125 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000075690, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000bfb500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000075b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000536e40, {0x3695ae0, 0xc0008de930}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000536e40, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 184
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3493 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000917090, 0x4)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002041080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009170c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0015cf090, {0x3695ae0, 0xc001ebe3f0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0015cf090, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3512
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 773 [IO wait, 101 minutes]:
internal/poll.runtime_pollWait(0x7fe8dc55bbb8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x13?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0004a4200)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0004a4200)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0001ca960)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0001ca960)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007f80f0, {0x36ac9c0, 0xc0001ca960})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0007f80f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x592e44?, 0xc0006c96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 770
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 226 [select, 112 minutes]:
net/http.(*persistConn).writeLoop(0xc00141c480)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 205
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 2866 [chan receive, 32 minutes]:
testing.(*T).Run(0xc0019feea0, {0x265b689?, 0x551133?}, 0x313a3e0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0019feea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0019feea0, 0x313a208)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 4210 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b99a0, 0xc00003a770}, {0x36ad080, 0xc00075eca0}, 0x1, 0x0, 0xc000b79c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b99a0?, 0xc0000281c0?}, 0x3b9aca00, 0xc000665e10?, 0x1, 0xc000665c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b99a0, 0xc0000281c0}, 0xc000820680, {0xc0017c6498, 0x16}, {0x26816c6, 0x14}, {0x2699286, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36b99a0, 0xc0000281c0}, 0xc000820680, {0xc0017c6498, 0x16}, {0x2672a04?, 0xc000a5d760?}, {0x551133?, 0x4a170f?}, {0xc000a6a000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000820680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000820680, 0xc000224880)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3448
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3511 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020411a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3507
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 940 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9b60, 0xc000060060}, 0xc000a57750, 0xc001473f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9b60, 0xc000060060}, 0xc0?, 0xc000a57750, 0xc000a57798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9b60?, 0xc000060060?}, 0xc0006c8d00?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000a577d0?, 0x592e44?, 0xc00151d2c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 877
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 876 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001958660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 939 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0008d0d10, 0x27)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001958420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008d0d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001af3350, {0x3695ae0, 0xc000124c60}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001af3350, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 877
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3955 [IO wait]:
internal/poll.runtime_pollWait(0x7fe8dc55b7d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0014d1b80?, 0xc0017f4000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014d1b80, {0xc0017f4000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0014d1b80, {0xc0017f4000?, 0xc00157bcc0?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc00065c6d0, {0xc0017f4000?, 0xc0017f405f?, 0x6f?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc0020829a8, {0xc0017f4000?, 0x0?, 0xc0020829a8?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0007597b0, {0x3696280, 0xc0020829a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000759508, {0x7fe8dc3e5e08, 0xc000af6018}, 0xc002103980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000759508, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc000759508, {0xc00199d000, 0x1000, 0xc0017fda40?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0019cd8c0, {0xc0006602e0, 0x9, 0x498cc30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3694760, 0xc0019cd8c0}, {0xc0006602e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0006602e0, 0x9, 0x2103dc0?}, {0x3694760?, 0xc0019cd8c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0006602a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc002103fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0017c4780)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3954
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 3448 [chan receive, 8 minutes]:
testing.(*T).Run(0xc00138e340, {0x2687442?, 0x60400000004?}, 0xc000224880)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00138e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00138e340, 0xc000224580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2848
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 877 [chan receive, 99 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008d0d40, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 973 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a6af00, 0xc000182b40)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 813
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2976 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000c6eea0, 0xc00148c1b0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2788
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3495 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3494
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 951 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc00029bc80, 0xc00151d920)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 950
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2848 [chan receive, 27 minutes]:
testing.(*T).Run(0xc0006c9520, {0x265cc34?, 0x0?}, 0xc000224580)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006c9520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0006c9520, 0xc0000750c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2847
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3041 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000761720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c75a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c75a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c75a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c75a00, 0xc000224980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2976
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3564 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9b60, 0xc000060060}, 0xc00145df50, 0xc00145df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9b60, 0xc000060060}, 0x60?, 0xc00145df50, 0xc00145df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9b60?, 0xc000060060?}, 0x6db57a?, 0x7b8e18?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00145dfd0?, 0x592e44?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3619
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3565 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3564
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 941 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 940
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2931 [chan receive, 27 minutes]:
testing.(*T).Run(0xc0006c9ba0, {0x265cc34?, 0x0?}, 0xc000644f00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006c9ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0006c9ba0, 0xc000a30040)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2847
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2849 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000761720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006c9860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006c9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006c9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0006c9860, 0xc000075cc0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2847
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3619 [chan receive, 21 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001910ac0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3606
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1138 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a6aa80, 0xc001d04d80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1137
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3040 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000761720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c75860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c75860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c75860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c75860, 0xc000224900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2976
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3079 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000761720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c6f860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c6f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c6f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c6f860, 0xc0006a3580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2976
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1183 [select, 97 minutes]:
net/http.(*persistConn).writeLoop(0xc0006d7680)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1180
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 2933 [chan receive, 24 minutes]:
testing.(*T).Run(0xc000c6e4e0, {0x265cc34?, 0x0?}, 0xc000645080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000c6e4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000c6e4e0, 0xc000a30340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2847
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 4188 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0016a6710, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001e61320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0016a6740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0015e6aa0, {0x3695ae0, 0xc0016c4480}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0015e6aa0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4212
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3512 [chan receive, 24 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009170c0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3507
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 6211 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b99a0, 0xc00033d0a0}, {0x36ad080, 0xc001396240}, 0x1, 0x0, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b99a0?, 0xc0000361c0?}, 0x3b9aca00, 0xc000b75d38?, 0x1, 0xc000b75b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b99a0, 0xc0000361c0}, 0xc000821380, {0xc0007fc0c0, 0x1c}, {0x26816c6, 0x14}, {0x2699286, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36b99a0, 0xc0000361c0}, 0xc000821380, {0xc0007fc0c0, 0x1c}, {0x26845c0?, 0xc0014bff60?}, {0x551133?, 0x4a170f?}, {0xc000546800, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000821380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000821380, 0xc001fa2080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3575
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3078 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000761720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c6f6c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c6f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c6f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c6f6c0, 0xc0006a3400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2976
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3494 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9b60, 0xc000060060}, 0xc001833750, 0xc000b55f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9b60, 0xc000060060}, 0x80?, 0xc001833750, 0xc001833798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9b60?, 0xc000060060?}, 0xc0006c8820?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0018337d0?, 0x592e44?, 0xc00143e480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3512
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3080 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000761720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000c6fa00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000c6fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000c6fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000c6fa00, 0xc0006a3600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2976
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3700 [IO wait]:
internal/poll.runtime_pollWait(0x7fe8dc0c2640, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0004a4800?, 0xc000a2b800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0004a4800, {0xc000a2b800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0004a4800, {0xc000a2b800?, 0x7fe8dc0c3f08?, 0xc001531290?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc00065c020, {0xc000a2b800?, 0xc000b6a938?, 0x41469b?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001531290, {0xc000a2b800?, 0x0?, 0xc001531290?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0002b1b30, {0x3696280, 0xc001531290})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0002b1888, {0x3695660, 0xc00065c020}, 0xc000b6a980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0002b1888, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0002b1888, {0xc000886000, 0x1000, 0xc0017fd180?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0014643c0, {0xc0008422e0, 0x9, 0x498cc30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3694760, 0xc0014643c0}, {0xc0008422e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0008422e0, 0x9, 0xb6adc0?}, {0x3694760?, 0xc0014643c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0008422a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000b6afa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001584000)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3699
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 4211 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001e617a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 4210
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3408 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000c6ed00, {0x268172a?, 0x60400000004?}, 0xc0014d0080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000c6ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000c6ed00, 0xc000645080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2933
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 6564 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b9930, 0xc000053680}, {0x36ad080, 0xc001396860}, 0x1, 0x0, 0xc000669b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b99a0?, 0xc0000380e0?}, 0x3b9aca00, 0xc000669d38?, 0x1, 0xc000669b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b99a0, 0xc0000380e0}, 0xc000821520, {0xc00005eac8, 0x11}, {0x26816c6, 0x14}, {0x2699286, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36b99a0, 0xc0000380e0}, 0xc000821520, {0xc00005eac8, 0x11}, {0x266685a?, 0xc0014be760?}, {0x551133?, 0x4a170f?}, {0xc000546600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000821520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000821520, 0xc0004a4080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3434
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 5796 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b9930, 0xc0001701e0}, {0x36ad080, 0xc00152ed80}, 0x1, 0x0, 0xc000665b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b99a0?, 0xc0000381c0?}, 0x3b9aca00, 0xc0013edd38?, 0x1, 0xc0013edb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b99a0, 0xc0000381c0}, 0xc000821040, {0xc00155c288, 0x12}, {0x26816c6, 0x14}, {0x2699286, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36b99a0, 0xc0000381c0}, 0xc000821040, {0xc00155c288, 0x12}, {0x2668a68?, 0xc0014c1f60?}, {0x551133?, 0x4a170f?}, {0xc000546700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000821040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000821040, 0xc0014d0080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3408
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3799 [IO wait]:
internal/poll.runtime_pollWait(0x7fe8dc55b4f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00169d580?, 0xc000635000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00169d580, {0xc000635000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc00169d580, {0xc000635000?, 0xc00019ef00?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000738b10, {0xc000635000?, 0xc00063505f?, 0x6f?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001531320, {0xc000635000?, 0x0?, 0xc001531320?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00022beb0, {0x3696280, 0xc001531320})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00022bc08, {0x7fe8dc3e5e08, 0xc0020828d0}, 0xc00088e980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00022bc08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00022bc08, {0xc000c71000, 0x1000, 0xc00145b180?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001485b00, {0xc001478740, 0x9, 0x498cc30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3694760, 0xc001485b00}, {0xc001478740, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001478740, 0x9, 0x88edc0?}, {0x3694760?, 0xc001485b00?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001478700)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00088efa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00029b500)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3798
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                    

Test pass (182/233)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 45.96
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 26.67
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 44.11
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.59
31 TestOffline 63.96
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 426.54
40 TestAddons/serial/GCPAuth/Namespaces 32.17
42 TestAddons/parallel/Registry 15.72
44 TestAddons/parallel/InspektorGadget 11.28
46 TestAddons/parallel/HelmTiller 19.78
48 TestAddons/parallel/CSI 57.15
49 TestAddons/parallel/Headlamp 19.71
50 TestAddons/parallel/CloudSpanner 5.58
51 TestAddons/parallel/LocalPath 12.09
52 TestAddons/parallel/NvidiaDevicePlugin 5.5
53 TestAddons/parallel/Yakd 10.92
55 TestCertOptions 64.91
56 TestCertExpiration 364.77
58 TestForceSystemdFlag 65.74
59 TestForceSystemdEnv 74.7
61 TestKVMDriverInstallOrUpdate 4.55
65 TestErrorSpam/setup 45.97
66 TestErrorSpam/start 0.35
67 TestErrorSpam/status 0.73
68 TestErrorSpam/pause 1.54
69 TestErrorSpam/unpause 1.56
70 TestErrorSpam/stop 4.83
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 56.15
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 37.31
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.62
82 TestFunctional/serial/CacheCmd/cache/add_local 2.44
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.97
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.11
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 33.86
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.37
93 TestFunctional/serial/LogsFileCmd 1.36
94 TestFunctional/serial/InvalidService 4.39
96 TestFunctional/parallel/ConfigCmd 0.35
97 TestFunctional/parallel/DashboardCmd 14.55
98 TestFunctional/parallel/DryRun 0.36
99 TestFunctional/parallel/InternationalLanguage 0.17
100 TestFunctional/parallel/StatusCmd 1.16
104 TestFunctional/parallel/ServiceCmdConnect 8.5
105 TestFunctional/parallel/AddonsCmd 0.14
106 TestFunctional/parallel/PersistentVolumeClaim 26.97
108 TestFunctional/parallel/SSHCmd 0.47
109 TestFunctional/parallel/CpCmd 1.41
111 TestFunctional/parallel/FileSync 0.25
112 TestFunctional/parallel/CertSync 1.58
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
120 TestFunctional/parallel/License 1.05
121 TestFunctional/parallel/ServiceCmd/DeployApp 12.22
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
123 TestFunctional/parallel/ProfileCmd/profile_list 0.28
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
125 TestFunctional/parallel/MountCmd/any-port 9.76
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.53
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
135 TestFunctional/parallel/ImageCommands/ImageBuild 3.33
136 TestFunctional/parallel/ImageCommands/Setup 1.79
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.75
141 TestFunctional/parallel/MountCmd/specific-port 1.65
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.45
144 TestFunctional/parallel/ServiceCmd/List 0.52
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
148 TestFunctional/parallel/ServiceCmd/Format 0.6
149 TestFunctional/parallel/ServiceCmd/URL 0.34
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 210.08
167 TestMultiControlPlane/serial/DeployApp 6.45
168 TestMultiControlPlane/serial/PingHostFromPods 1.2
169 TestMultiControlPlane/serial/AddWorkerNode 55.82
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.57
172 TestMultiControlPlane/serial/CopyFile 12.98
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.29
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
185 TestJSONOutput/start/Command 65.84
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.63
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.6
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.34
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.19
213 TestMainNoArgs 0.04
214 TestMinikubeProfile 85.54
217 TestMountStart/serial/StartWithMountFirst 29.69
218 TestMountStart/serial/VerifyMountFirst 0.38
219 TestMountStart/serial/StartWithMountSecond 24.38
220 TestMountStart/serial/VerifyMountSecond 0.37
221 TestMountStart/serial/DeleteFirst 0.7
222 TestMountStart/serial/VerifyMountPostDelete 0.37
223 TestMountStart/serial/Stop 1.27
224 TestMountStart/serial/RestartStopped 23.53
225 TestMountStart/serial/VerifyMountPostStop 0.38
228 TestMultiNode/serial/FreshStart2Nodes 118.05
229 TestMultiNode/serial/DeployApp2Nodes 5.03
230 TestMultiNode/serial/PingHostFrom2Pods 0.79
231 TestMultiNode/serial/AddNode 48.29
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.22
234 TestMultiNode/serial/CopyFile 7.24
235 TestMultiNode/serial/StopNode 2.25
236 TestMultiNode/serial/StartAfterStop 39.02
238 TestMultiNode/serial/DeleteNode 2.43
240 TestMultiNode/serial/RestartMultiNode 180.33
241 TestMultiNode/serial/ValidateNameConflict 41.03
248 TestScheduledStopUnix 113.61
252 TestRunningBinaryUpgrade 193.89
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
258 TestNoKubernetes/serial/StartWithK8s 97.37
267 TestPause/serial/Start 106.44
268 TestNoKubernetes/serial/StartWithStopK8s 42.21
269 TestNoKubernetes/serial/Start 26.77
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
271 TestNoKubernetes/serial/ProfileList 24.37
272 TestPause/serial/SecondStartNoReconfiguration 36.98
273 TestNoKubernetes/serial/Stop 1.32
274 TestNoKubernetes/serial/StartNoArgs 22.92
275 TestStoppedBinaryUpgrade/Setup 2.31
276 TestStoppedBinaryUpgrade/Upgrade 134.05
277 TestPause/serial/Pause 0.68
278 TestPause/serial/VerifyStatus 0.25
279 TestPause/serial/Unpause 0.59
280 TestPause/serial/PauseAgain 0.71
281 TestPause/serial/DeletePaused 0.96
282 TestPause/serial/VerifyDeletedResources 0.33
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
TestDownloadOnly/v1.20.0/json-events (45.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-633867 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-633867 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (45.963366408s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (45.96s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-633867
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-633867: exit status 85 (60.464071ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-633867 | jenkins | v1.33.1 | 30 Jul 24 00:04 UTC |          |
	|         | -p download-only-633867        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:04:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:04:05.309590  502396 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:04:05.309874  502396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:04:05.309885  502396 out.go:304] Setting ErrFile to fd 2...
	I0730 00:04:05.309890  502396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:04:05.310107  502396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	W0730 00:04:05.310238  502396 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19346-495103/.minikube/config/config.json: open /home/jenkins/minikube-integration/19346-495103/.minikube/config/config.json: no such file or directory
	I0730 00:04:05.310867  502396 out.go:298] Setting JSON to true
	I0730 00:04:05.311903  502396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6387,"bootTime":1722291458,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:04:05.311975  502396 start.go:139] virtualization: kvm guest
	I0730 00:04:05.314619  502396 out.go:97] [download-only-633867] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0730 00:04:05.314751  502396 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball: no such file or directory
	I0730 00:04:05.314819  502396 notify.go:220] Checking for updates...
	I0730 00:04:05.316304  502396 out.go:169] MINIKUBE_LOCATION=19346
	I0730 00:04:05.317825  502396 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:04:05.319215  502396 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:04:05.320812  502396 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:04:05.322308  502396 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0730 00:04:05.324988  502396 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0730 00:04:05.325225  502396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:04:05.358257  502396 out.go:97] Using the kvm2 driver based on user configuration
	I0730 00:04:05.358287  502396 start.go:297] selected driver: kvm2
	I0730 00:04:05.358294  502396 start.go:901] validating driver "kvm2" against <nil>
	I0730 00:04:05.358638  502396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:04:05.358750  502396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:04:05.375167  502396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:04:05.375237  502396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 00:04:05.376039  502396 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0730 00:04:05.376289  502396 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0730 00:04:05.376366  502396 cni.go:84] Creating CNI manager for ""
	I0730 00:04:05.376380  502396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:04:05.376391  502396 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0730 00:04:05.376558  502396 start.go:340] cluster config:
	{Name:download-only-633867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-633867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:04:05.377104  502396 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:04:05.379189  502396 out.go:97] Downloading VM boot image ...
	I0730 00:04:05.379224  502396 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0730 00:04:14.733147  502396 out.go:97] Starting "download-only-633867" primary control-plane node in "download-only-633867" cluster
	I0730 00:04:14.733184  502396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0730 00:04:14.835088  502396 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0730 00:04:14.835130  502396 cache.go:56] Caching tarball of preloaded images
	I0730 00:04:14.835305  502396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0730 00:04:14.837514  502396 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0730 00:04:14.837535  502396 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0730 00:04:14.939560  502396 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0730 00:04:26.361985  502396 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0730 00:04:26.362090  502396 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0730 00:04:27.413070  502396 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0730 00:04:27.413444  502396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/download-only-633867/config.json ...
	I0730 00:04:27.413476  502396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/download-only-633867/config.json: {Name:mk73bec5d3b3e7011f0da9f1f95c387cbcf3525d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:04:27.413660  502396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0730 00:04:27.413827  502396 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-633867 host does not exist
	  To start a cluster, run: "minikube start -p download-only-633867"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-633867
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (26.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-800416 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-800416 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.66948797s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (26.67s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-800416
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-800416: exit status 85 (61.464308ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-633867 | jenkins | v1.33.1 | 30 Jul 24 00:04 UTC |                     |
	|         | -p download-only-633867        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 30 Jul 24 00:04 UTC | 30 Jul 24 00:04 UTC |
	| delete  | -p download-only-633867        | download-only-633867 | jenkins | v1.33.1 | 30 Jul 24 00:04 UTC | 30 Jul 24 00:04 UTC |
	| start   | -o=json --download-only        | download-only-800416 | jenkins | v1.33.1 | 30 Jul 24 00:04 UTC |                     |
	|         | -p download-only-800416        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:04:51
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:04:51.599960  502715 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:04:51.600113  502715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:04:51.600124  502715 out.go:304] Setting ErrFile to fd 2...
	I0730 00:04:51.600130  502715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:04:51.600334  502715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:04:51.600998  502715 out.go:298] Setting JSON to true
	I0730 00:04:51.602701  502715 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6434,"bootTime":1722291458,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:04:51.602782  502715 start.go:139] virtualization: kvm guest
	I0730 00:04:51.604739  502715 out.go:97] [download-only-800416] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:04:51.604911  502715 notify.go:220] Checking for updates...
	I0730 00:04:51.606166  502715 out.go:169] MINIKUBE_LOCATION=19346
	I0730 00:04:51.607638  502715 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:04:51.608934  502715 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:04:51.610294  502715 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:04:51.611641  502715 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0730 00:04:51.613936  502715 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0730 00:04:51.614236  502715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:04:51.646913  502715 out.go:97] Using the kvm2 driver based on user configuration
	I0730 00:04:51.646954  502715 start.go:297] selected driver: kvm2
	I0730 00:04:51.646960  502715 start.go:901] validating driver "kvm2" against <nil>
	I0730 00:04:51.647336  502715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:04:51.647461  502715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:04:51.664182  502715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:04:51.664241  502715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 00:04:51.664865  502715 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0730 00:04:51.665034  502715 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0730 00:04:51.665097  502715 cni.go:84] Creating CNI manager for ""
	I0730 00:04:51.665113  502715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:04:51.665126  502715 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0730 00:04:51.665192  502715 start.go:340] cluster config:
	{Name:download-only-800416 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-800416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:04:51.665312  502715 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:04:51.666978  502715 out.go:97] Starting "download-only-800416" primary control-plane node in "download-only-800416" cluster
	I0730 00:04:51.667004  502715 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:04:52.544779  502715 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:04:52.544832  502715 cache.go:56] Caching tarball of preloaded images
	I0730 00:04:52.544988  502715 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 00:04:52.546946  502715 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0730 00:04:52.546966  502715 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0730 00:04:52.649249  502715 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0730 00:05:16.556734  502715 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0730 00:05:16.556840  502715 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-800416 host does not exist
	  To start a cluster, run: "minikube start -p download-only-800416"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-800416
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (44.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-232646 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-232646 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (44.11383132s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (44.11s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-232646
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-232646: exit status 85 (63.115807ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-633867 | jenkins | v1.33.1 | 30 Jul 24 00:04 UTC |                     |
	|         | -p download-only-633867             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 30 Jul 24 00:04 UTC | 30 Jul 24 00:04 UTC |
	| delete  | -p download-only-633867             | download-only-633867 | jenkins | v1.33.1 | 30 Jul 24 00:04 UTC | 30 Jul 24 00:04 UTC |
	| start   | -o=json --download-only             | download-only-800416 | jenkins | v1.33.1 | 30 Jul 24 00:04 UTC |                     |
	|         | -p download-only-800416             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 30 Jul 24 00:05 UTC | 30 Jul 24 00:05 UTC |
	| delete  | -p download-only-800416             | download-only-800416 | jenkins | v1.33.1 | 30 Jul 24 00:05 UTC | 30 Jul 24 00:05 UTC |
	| start   | -o=json --download-only             | download-only-232646 | jenkins | v1.33.1 | 30 Jul 24 00:05 UTC |                     |
	|         | -p download-only-232646             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 00:05:18
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 00:05:18.595560  502983 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:05:18.595669  502983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:05:18.595674  502983 out.go:304] Setting ErrFile to fd 2...
	I0730 00:05:18.595678  502983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:05:18.595854  502983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:05:18.596453  502983 out.go:298] Setting JSON to true
	I0730 00:05:18.597484  502983 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6461,"bootTime":1722291458,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:05:18.597550  502983 start.go:139] virtualization: kvm guest
	I0730 00:05:18.599893  502983 out.go:97] [download-only-232646] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:05:18.600119  502983 notify.go:220] Checking for updates...
	I0730 00:05:18.601676  502983 out.go:169] MINIKUBE_LOCATION=19346
	I0730 00:05:18.603036  502983 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:05:18.604382  502983 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:05:18.605633  502983 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:05:18.606918  502983 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0730 00:05:18.609494  502983 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0730 00:05:18.609767  502983 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:05:18.642647  502983 out.go:97] Using the kvm2 driver based on user configuration
	I0730 00:05:18.642686  502983 start.go:297] selected driver: kvm2
	I0730 00:05:18.642694  502983 start.go:901] validating driver "kvm2" against <nil>
	I0730 00:05:18.643148  502983 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:05:18.643237  502983 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19346-495103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0730 00:05:18.658999  502983 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0730 00:05:18.659057  502983 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 00:05:18.659581  502983 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0730 00:05:18.659734  502983 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0730 00:05:18.659796  502983 cni.go:84] Creating CNI manager for ""
	I0730 00:05:18.659808  502983 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0730 00:05:18.659818  502983 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0730 00:05:18.659879  502983 start.go:340] cluster config:
	{Name:download-only-232646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-232646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:05:18.659973  502983 iso.go:125] acquiring lock: {Name:mk34d12b9a2ed8a2e277788b456b0df4d8f0feeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 00:05:18.661886  502983 out.go:97] Starting "download-only-232646" primary control-plane node in "download-only-232646" cluster
	I0730 00:05:18.661912  502983 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0730 00:05:19.092245  502983 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0730 00:05:19.092295  502983 cache.go:56] Caching tarball of preloaded images
	I0730 00:05:19.092465  502983 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0730 00:05:19.094320  502983 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0730 00:05:19.094339  502983 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0730 00:05:19.192021  502983 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0730 00:05:28.561221  502983 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0730 00:05:28.561327  502983 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19346-495103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0730 00:05:29.410260  502983 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0730 00:05:29.410682  502983 profile.go:143] Saving config to /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/download-only-232646/config.json ...
	I0730 00:05:29.410720  502983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/download-only-232646/config.json: {Name:mk399ba4201efdb1fc437e4feecfc2dff641213d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 00:05:29.410922  502983 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0730 00:05:29.411101  502983 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19346-495103/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-232646 host does not exist
	  To start a cluster, run: "minikube start -p download-only-232646"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-232646
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-248146 --alsologtostderr --binary-mirror http://127.0.0.1:38989 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-248146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-248146
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (63.96s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-539887 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-539887 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m3.083392752s)
helpers_test.go:175: Cleaning up "offline-crio-539887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-539887
--- PASS: TestOffline (63.96s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-091578
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-091578: exit status 85 (55.056468ms)

                                                
                                                
-- stdout --
	* Profile "addons-091578" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-091578"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-091578
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-091578: exit status 85 (56.29651ms)

                                                
                                                
-- stdout --
	* Profile "addons-091578" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-091578"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (426.54s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-091578 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-091578 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m6.540014855s)
--- PASS: TestAddons/Setup (426.54s)
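
Note: the addon-heavy start exercised above can be reproduced outside the test harness with a plain minikube invocation. A minimal sketch, assuming the same kvm2/crio environment and profile name, and trimming the addon list shown in the test to a representative subset:

	# bring up a single-node cluster with a few of the addons the test enables
	minikube start -p addons-091578 --memory=4000 --driver=kvm2 --container-runtime=crio \
	  --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns
	# tear the profile down afterwards
	minikube delete -p addons-091578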

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (32.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-091578 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-091578 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-091578 get secret gcp-auth -n new-namespace: exit status 1 (88.763365ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth
addons_test.go:662: (dbg) Non-zero exit: kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth: exit status 1 (83.606763ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "gcp-auth" in pod "gcp-auth-5db96cd9b4-5cxwj" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
addons_test.go:670: (dbg) Run:  kubectl --context addons-091578 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-091578 get secret gcp-auth -n new-namespace: exit status 1 (63.971274ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth
addons_test.go:662: (dbg) Non-zero exit: kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth: exit status 1 (67.423827ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "gcp-auth" in pod "gcp-auth-5db96cd9b4-5cxwj" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
addons_test.go:670: (dbg) Run:  kubectl --context addons-091578 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-091578 get secret gcp-auth -n new-namespace: exit status 1 (63.208079ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth
addons_test.go:662: (dbg) Non-zero exit: kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth: exit status 1 (89.937468ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "gcp-auth" in pod "gcp-auth-5db96cd9b4-5cxwj" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
addons_test.go:670: (dbg) Run:  kubectl --context addons-091578 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-091578 get secret gcp-auth -n new-namespace: exit status 1 (61.81664ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth
addons_test.go:662: (dbg) Non-zero exit: kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth: exit status 1 (71.745874ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "gcp-auth" in pod "gcp-auth-5db96cd9b4-5cxwj" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
addons_test.go:670: (dbg) Run:  kubectl --context addons-091578 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-091578 get secret gcp-auth -n new-namespace: exit status 1 (66.169064ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth
addons_test.go:662: (dbg) Non-zero exit: kubectl --context addons-091578 logs -l app=gcp-auth -n gcp-auth: exit status 1 (62.019046ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "gcp-auth" in pod "gcp-auth-5db96cd9b4-5cxwj" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
addons_test.go:670: (dbg) Run:  kubectl --context addons-091578 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (32.17s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.734604ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-mczh9" [99907a0e-3d47-408f-b8ea-3725dee9f03b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005646611s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nqxzf" [613243a6-ea19-4999-ad5f-ca96c8e11bfd] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005732577s
addons_test.go:342: (dbg) Run:  kubectl --context addons-091578 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-091578 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-091578 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.877451252s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 ip
2024/07/30 00:14:09 [DEBUG] GET http://192.168.39.214:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.72s)
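
Note: the registry check above amounts to probing the in-cluster registry Service from a throwaway pod. A minimal sketch of the same probe, assuming the registry addon is still enabled on this profile:

	# ask a one-off busybox pod to HEAD the registry's cluster-local endpoint
	kubectl --context addons-091578 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# disable the addon when done
	minikube -p addons-091578 addons disable registry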

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x5xmm" [279040f4-8d6d-414e-824c-91b2e90676b4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004582864s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-091578
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-091578: (6.27261632s)
--- PASS: TestAddons/parallel/InspektorGadget (11.28s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (19.78s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.369449ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-7kxlp" [e02f9185-5b7f-40f5-baf0-64a0c45bc97e] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004717086s
addons_test.go:475: (dbg) Run:  kubectl --context addons-091578 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-091578 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (14.19124979s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (19.78s)

                                                
                                    
x
+
TestAddons/parallel/CSI (57.15s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.489385ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-091578 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-091578 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [40715b3a-7231-4590-b128-9e08eeae16e0] Pending
helpers_test.go:344: "task-pv-pod" [40715b3a-7231-4590-b128-9e08eeae16e0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [40715b3a-7231-4590-b128-9e08eeae16e0] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.004122229s
addons_test.go:590: (dbg) Run:  kubectl --context addons-091578 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-091578 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-091578 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-091578 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-091578 delete pod task-pv-pod: (1.205498165s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-091578 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-091578 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-091578 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4c332b1b-6725-4ef3-99fa-31ee6204a88d] Pending
helpers_test.go:344: "task-pv-pod-restore" [4c332b1b-6725-4ef3-99fa-31ee6204a88d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4c332b1b-6725-4ef3-99fa-31ee6204a88d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004045547s
addons_test.go:632: (dbg) Run:  kubectl --context addons-091578 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-091578 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-091578 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.15s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-091578 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-zzwld" [452af676-9869-4ca2-969b-ba3c82b5319d] Pending
helpers_test.go:344: "headlamp-7867546754-zzwld" [452af676-9869-4ca2-969b-ba3c82b5319d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-zzwld" [452af676-9869-4ca2-969b-ba3c82b5319d] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004510799s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-091578 addons disable headlamp --alsologtostderr -v=1: (5.735149147s)
--- PASS: TestAddons/parallel/Headlamp (19.71s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-llr2p" [129f14c2-e1fb-4535-a816-f436f3dd4f53] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00586896s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-091578
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (12.09s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-091578 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-091578 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-091578 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fcec4a14-2869-4a9e-be98-9cf279810bf7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fcec4a14-2869-4a9e-be98-9cf279810bf7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fcec4a14-2869-4a9e-be98-9cf279810bf7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003978504s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-091578 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 ssh "cat /opt/local-path-provisioner/pvc-f03646c2-17c5-467c-9078-e8eb4c5ef372_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-091578 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-091578 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.09s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ln654" [f07b96ab-d52e-45d8-9c29-00c89fc8619e] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005781805s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-091578
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-hz2sc" [171e3b87-fbdd-4a10-bb11-5ce1667e3d26] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005397034s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-091578 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-091578 addons disable yakd --alsologtostderr -v=1: (5.914487795s)
--- PASS: TestAddons/parallel/Yakd (10.92s)

                                                
                                    
x
+
TestCertOptions (64.91s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-398469 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-398469 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m3.636233797s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-398469 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-398469 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-398469 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-398469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-398469
--- PASS: TestCertOptions (64.91s)

                                                
                                    
x
+
TestCertExpiration (364.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-050894 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0730 01:36:10.082891  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-050894 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m14.706334712s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-050894 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-050894 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m49.042812829s)
helpers_test.go:175: Cleaning up "cert-expiration-050894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-050894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-050894: (1.014886137s)
--- PASS: TestCertExpiration (364.77s)
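
Note: the flow above starts a cluster whose certificates expire in three minutes, lets them lapse, then starts the same profile again with a long expiry so minikube regenerates the certificates. A minimal sketch of that sequence, assuming the kvm2/crio environment used here:

	minikube start -p cert-expiration-050894 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	sleep 180   # wait out the 3m window so the short-lived certs actually expire
	minikube start -p cert-expiration-050894 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
	minikube delete -p cert-expiration-050894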

                                                
                                    
x
+
TestForceSystemdFlag (65.74s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-452226 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-452226 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.459790169s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-452226 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-452226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-452226
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-452226: (1.074871688s)
--- PASS: TestForceSystemdFlag (65.74s)

                                                
                                    
x
+
TestForceSystemdEnv (74.7s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-191803 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-191803 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.686126002s)
helpers_test.go:175: Cleaning up "force-systemd-env-191803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-191803
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-191803: (1.010798124s)
--- PASS: TestForceSystemdEnv (74.70s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.55s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.55s)

                                                
                                    
x
+
TestErrorSpam/setup (45.97s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-921154 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-921154 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-921154 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-921154 --driver=kvm2  --container-runtime=crio: (45.973178194s)
--- PASS: TestErrorSpam/setup (45.97s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

                                                
                                    
x
+
TestErrorSpam/stop (4.83s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 stop: (1.521308115s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 stop
E0730 00:23:42.934542  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:23:42.940698  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:23:42.951020  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:23:42.971331  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:23:43.011681  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:23:43.092082  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:23:43.252554  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 stop: (1.576052946s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 stop
E0730 00:23:43.573711  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:23:44.214717  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-921154 --log_dir /tmp/nospam-921154 stop: (1.729772991s)
--- PASS: TestErrorSpam/stop (4.83s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19346-495103/.minikube/files/etc/test/nested/copy/502384/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (56.15s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-844183 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0730 00:23:48.056623  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:23:53.177479  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:24:03.418525  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:24:23.898782  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-844183 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.149794571s)
--- PASS: TestFunctional/serial/StartWithProxy (56.15s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (37.31s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-844183 --alsologtostderr -v=8
E0730 00:25:04.859957  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-844183 --alsologtostderr -v=8: (37.309581434s)
functional_test.go:659: soft start took 37.310444545s for "functional-844183" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.31s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-844183 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 cache add registry.k8s.io/pause:3.1: (1.5131249s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 cache add registry.k8s.io/pause:3.3: (1.538795858s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 cache add registry.k8s.io/pause:latest: (1.56838898s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.62s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-844183 /tmp/TestFunctionalserialCacheCmdcacheadd_local882476579/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 cache add minikube-local-cache-test:functional-844183
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 cache add minikube-local-cache-test:functional-844183: (2.110872098s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 cache delete minikube-local-cache-test:functional-844183
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-844183
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.44s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.0545ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 cache reload: (1.300073527s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)
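
The cache_reload steps above can be replayed by hand with the same commands the log shows: delete the cached image inside the node, confirm crictl no longer sees it (non-zero exit), then let cache reload push it back:
    out/minikube-linux-amd64 -p functional-844183 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-844183 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image gone
    out/minikube-linux-amd64 -p functional-844183 cache reload
    out/minikube-linux-amd64 -p functional-844183 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again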

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 kubectl -- --context functional-844183 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-844183 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.86s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-844183 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-844183 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.854673928s)
functional_test.go:757: restart took 33.854794569s for "functional-844183" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.86s)
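
The restart above is simply a re-run of minikube start against the existing profile with an extra apiserver flag; roughly, with the flag value copied from the log:
    out/minikube-linux-amd64 start -p functional-844183 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all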

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-844183 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 logs: (1.373206463s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 logs --file /tmp/TestFunctionalserialLogsFileCmd1899914121/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 logs --file /tmp/TestFunctionalserialLogsFileCmd1899914121/001/logs.txt: (1.360908279s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
TestFunctional/serial/InvalidService (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-844183 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-844183
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-844183: exit status 115 (277.611894ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.57:31232 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-844183 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)
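
The InvalidService check boils down to pointing minikube service at a Service with no backing pods and expecting exit status 115 (SVC_UNREACHABLE); a rough manual replay of the same commands:
    kubectl --context functional-844183 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-844183    # exits 115: no running pod for the service
    kubectl --context functional-844183 delete -f testdata/invalidsvc.yaml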

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 config get cpus: exit status 14 (52.603821ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 config get cpus: exit status 14 (53.243834ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
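
The config round-trip above is: config get on an unset key exits 14, a set/get pair succeeds, and unset returns the key to the not-found state. Sketch using the same key as the run:
    out/minikube-linux-amd64 -p functional-844183 config unset cpus
    out/minikube-linux-amd64 -p functional-844183 config get cpus    # exit 14: key not found in config
    out/minikube-linux-amd64 -p functional-844183 config set cpus 2
    out/minikube-linux-amd64 -p functional-844183 config get cpus    # exits 0 now that the key is set
    out/minikube-linux-amd64 -p functional-844183 config unset cpus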

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-844183 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-844183 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 512931: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.55s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-844183 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-844183 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (173.13843ms)

                                                
                                                
-- stdout --
	* [functional-844183] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19346
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:26:11.494396  512436 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:26:11.494516  512436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:26:11.494534  512436 out.go:304] Setting ErrFile to fd 2...
	I0730 00:26:11.494541  512436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:26:11.494812  512436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:26:11.495430  512436 out.go:298] Setting JSON to false
	I0730 00:26:11.496513  512436 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7713,"bootTime":1722291458,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:26:11.496615  512436 start.go:139] virtualization: kvm guest
	I0730 00:26:11.498203  512436 out.go:177] * [functional-844183] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0730 00:26:11.499732  512436 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:26:11.499753  512436 notify.go:220] Checking for updates...
	I0730 00:26:11.502229  512436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:26:11.503701  512436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:26:11.505027  512436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:26:11.506387  512436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:26:11.508311  512436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:26:11.510548  512436 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:26:11.511162  512436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:26:11.511245  512436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:26:11.533464  512436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39097
	I0730 00:26:11.533973  512436 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:26:11.534657  512436 main.go:141] libmachine: Using API Version  1
	I0730 00:26:11.534687  512436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:26:11.535192  512436 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:26:11.535462  512436 main.go:141] libmachine: (functional-844183) Calling .DriverName
	I0730 00:26:11.535833  512436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:26:11.536299  512436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:26:11.536347  512436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:26:11.558986  512436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I0730 00:26:11.559829  512436 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:26:11.560543  512436 main.go:141] libmachine: Using API Version  1
	I0730 00:26:11.560568  512436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:26:11.561006  512436 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:26:11.561225  512436 main.go:141] libmachine: (functional-844183) Calling .DriverName
	I0730 00:26:11.610339  512436 out.go:177] * Using the kvm2 driver based on existing profile
	I0730 00:26:11.611998  512436 start.go:297] selected driver: kvm2
	I0730 00:26:11.612031  512436 start.go:901] validating driver "kvm2" against &{Name:functional-844183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-844183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:26:11.612185  512436 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:26:11.614804  512436 out.go:177] 
	W0730 00:26:11.616268  512436 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0730 00:26:11.617600  512436 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-844183 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
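
The failing half of DryRun above is a start with an intentionally undersized --memory, which exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY before any VM work happens; the passing half repeats the dry run without the memory override. Both invocations are copied from the log:
    out/minikube-linux-amd64 start -p functional-844183 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio    # exit 23: 250MiB < 1800MB minimum
    out/minikube-linux-amd64 start -p functional-844183 --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio                      # succeeds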

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-844183 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-844183 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (173.822279ms)

                                                
                                                
-- stdout --
	* [functional-844183] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19346
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 00:26:11.335963  512397 out.go:291] Setting OutFile to fd 1 ...
	I0730 00:26:11.336085  512397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:26:11.336094  512397 out.go:304] Setting ErrFile to fd 2...
	I0730 00:26:11.336098  512397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 00:26:11.336403  512397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 00:26:11.336969  512397 out.go:298] Setting JSON to false
	I0730 00:26:11.338028  512397 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7713,"bootTime":1722291458,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0730 00:26:11.338111  512397 start.go:139] virtualization: kvm guest
	I0730 00:26:11.340345  512397 out.go:177] * [functional-844183] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0730 00:26:11.341644  512397 out.go:177]   - MINIKUBE_LOCATION=19346
	I0730 00:26:11.341663  512397 notify.go:220] Checking for updates...
	I0730 00:26:11.343111  512397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 00:26:11.344663  512397 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	I0730 00:26:11.345925  512397 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	I0730 00:26:11.347553  512397 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0730 00:26:11.348741  512397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 00:26:11.350668  512397 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 00:26:11.351324  512397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:26:11.351408  512397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:26:11.371792  512397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43927
	I0730 00:26:11.372277  512397 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:26:11.372948  512397 main.go:141] libmachine: Using API Version  1
	I0730 00:26:11.372983  512397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:26:11.373486  512397 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:26:11.373753  512397 main.go:141] libmachine: (functional-844183) Calling .DriverName
	I0730 00:26:11.374079  512397 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 00:26:11.374584  512397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 00:26:11.374635  512397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 00:26:11.396823  512397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37453
	I0730 00:26:11.397343  512397 main.go:141] libmachine: () Calling .GetVersion
	I0730 00:26:11.397875  512397 main.go:141] libmachine: Using API Version  1
	I0730 00:26:11.397897  512397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 00:26:11.398220  512397 main.go:141] libmachine: () Calling .GetMachineName
	I0730 00:26:11.398441  512397 main.go:141] libmachine: (functional-844183) Calling .DriverName
	I0730 00:26:11.438131  512397 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0730 00:26:11.439755  512397 start.go:297] selected driver: kvm2
	I0730 00:26:11.439779  512397 start.go:901] validating driver "kvm2" against &{Name:functional-844183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-844183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 00:26:11.439948  512397 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 00:26:11.443555  512397 out.go:177] 
	W0730 00:26:11.445310  512397 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0730 00:26:11.446497  512397 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)
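
The three status invocations above cover the default table, a custom Go template, and JSON output; a sketch with the template quoted for the shell (the "kublet" label is verbatim from the test source):
    out/minikube-linux-amd64 -p functional-844183 status
    out/minikube-linux-amd64 -p functional-844183 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-844183 status -o json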

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-844183 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-844183 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-b2fr6" [8d46f017-50a1-4470-ad17-61bbe14c5e20] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
2024/07/30 00:26:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "hello-node-connect-57b4589c47-b2fr6" [8d46f017-50a1-4470-ad17-61bbe14c5e20] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004702748s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.57:31333
functional_test.go:1671: http://192.168.39.57:31333: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-b2fr6

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.57:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.57:31333
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.50s)
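
The connect test above is the usual deploy, expose, resolve-URL loop; a manual replay of the logged commands, with curl added here only as an assumed final check (the test fetches the URL from Go, and the NodePort URL shown is specific to this run):
    kubectl --context functional-844183 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-844183 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-844183 service hello-node-connect --url   # e.g. http://192.168.39.57:31333
    curl http://192.168.39.57:31333/    # assumed check; should return the echoserver reply shown above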

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [000534bf-3966-4eda-8ffa-62739142ff82] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003864866s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-844183 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-844183 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-844183 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-844183 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d9b83700-27de-4cac-a5c0-4b6384d1f1b0] Pending
helpers_test.go:344: "sp-pod" [d9b83700-27de-4cac-a5c0-4b6384d1f1b0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d9b83700-27de-4cac-a5c0-4b6384d1f1b0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005159192s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-844183 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-844183 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-844183 delete -f testdata/storage-provisioner/pod.yaml: (1.211518554s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-844183 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f5516c93-260b-44cf-ac2c-58f5f448201a] Pending
helpers_test.go:344: "sp-pod" [f5516c93-260b-44cf-ac2c-58f5f448201a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f5516c93-260b-44cf-ac2c-58f5f448201a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004373413s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-844183 exec sp-pod -- ls /tmp/mount
E0730 00:28:42.934441  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:29:10.620927  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 00:33:42.934109  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.97s)
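
The PVC scenario above checks that data written into the claim survives pod deletion: create the claim and a pod mounting it, write a file, delete and recreate the pod, and confirm the file is still there. Roughly, using the same manifests as the run:
    kubectl --context functional-844183 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-844183 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-844183 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-844183 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-844183 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-844183 exec sp-pod -- ls /tmp/mount    # foo should still be listed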

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh -n functional-844183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 cp functional-844183:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1812378793/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh -n functional-844183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh -n functional-844183 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.41s)

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/502384/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /etc/test/nested/copy/502384/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/502384.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /etc/ssl/certs/502384.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/502384.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /usr/share/ca-certificates/502384.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/5023842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /etc/ssl/certs/5023842.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/5023842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /usr/share/ca-certificates/5023842.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)
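
CertSync simply cats the synced certificate files inside the VM under both standard paths and under their hashed names; a spot check of the primary cert (the 502384 id and 51391683.0 hash name come from this run):
    out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /etc/ssl/certs/502384.pem"
    out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /usr/share/ca-certificates/502384.pem"
    out/minikube-linux-amd64 -p functional-844183 ssh "sudo cat /etc/ssl/certs/51391683.0"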

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-844183 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 ssh "sudo systemctl is-active docker": exit status 1 (261.940767ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 ssh "sudo systemctl is-active containerd": exit status 1 (311.999612ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
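
Since this cluster runs CRI-O, the check above only asserts that the other runtimes report inactive (systemctl is-active exits non-zero, which ssh surfaces as exit status 1 with "inactive" on stdout):
    out/minikube-linux-amd64 -p functional-844183 ssh "sudo systemctl is-active docker"      # prints inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-844183 ssh "sudo systemctl is-active containerd"  # prints inactive, non-zero exit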

                                                
                                    
TestFunctional/parallel/License (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-linux-amd64 license: (1.053702083s)
--- PASS: TestFunctional/parallel/License (1.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-844183 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-844183 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-sqjzk" [7ac4d09c-be2e-464e-a4b7-60990a45b5ec] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-sqjzk" [7ac4d09c-be2e-464e-a4b7-60990a45b5ec] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.005211336s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "228.642322ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "48.664894ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "294.715598ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "51.48869ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdany-port831945929/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722299170924609645" to /tmp/TestFunctionalparallelMountCmdany-port831945929/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722299170924609645" to /tmp/TestFunctionalparallelMountCmdany-port831945929/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722299170924609645" to /tmp/TestFunctionalparallelMountCmdany-port831945929/001/test-1722299170924609645
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.972358ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 30 00:26 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 30 00:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 30 00:26 test-1722299170924609645
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh cat /mount-9p/test-1722299170924609645
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-844183 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b56fa44c-8542-4671-84ec-0f98a9e490b8] Pending
helpers_test.go:344: "busybox-mount" [b56fa44c-8542-4671-84ec-0f98a9e490b8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b56fa44c-8542-4671-84ec-0f98a9e490b8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b56fa44c-8542-4671-84ec-0f98a9e490b8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.007861331s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-844183 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdany-port831945929/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.76s)
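
The mount test above backgrounds a 9p mount of a host temp directory onto /mount-9p in the guest, verifies it with findmnt, runs a pod against it, and unmounts. A trimmed manual version; /tmp/somedir is a placeholder for any host directory (the run used a generated temp path):
    out/minikube-linux-amd64 mount -p functional-844183 /tmp/somedir:/mount-9p &    # /tmp/somedir is a placeholder host dir
    out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-844183 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-844183 ssh "sudo umount -f /mount-9p"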

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 version -o=json --components
E0730 00:26:26.780522  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
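
All three UpdateContextCmd subtests invoke the same command and differ only in the starting kubeconfig state suggested by their names (no_changes, no_minikube_cluster, no_clusters). The command refreshes the profile's kubeconfig entry so it matches the VM's current address; a sketch of the invocation with the flags used here:

# Refresh the kubeconfig entry for this profile (verbosity flags as in the test)
out/minikube-linux-amd64 -p functional-844183 update-context --alsologtostderr -v=2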

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-844183 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-844183
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-844183
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-844183 image ls --format short --alsologtostderr:
I0730 00:26:27.393421  514208 out.go:291] Setting OutFile to fd 1 ...
I0730 00:26:27.393556  514208 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:27.393565  514208 out.go:304] Setting ErrFile to fd 2...
I0730 00:26:27.393569  514208 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:27.393750  514208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
I0730 00:26:27.394344  514208 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:27.394487  514208 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:27.394832  514208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:27.394885  514208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:27.411087  514208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
I0730 00:26:27.411574  514208 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:27.412202  514208 main.go:141] libmachine: Using API Version  1
I0730 00:26:27.412225  514208 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:27.412665  514208 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:27.413923  514208 main.go:141] libmachine: (functional-844183) Calling .GetState
I0730 00:26:27.415801  514208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:27.415843  514208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:27.431788  514208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36507
I0730 00:26:27.432230  514208 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:27.432851  514208 main.go:141] libmachine: Using API Version  1
I0730 00:26:27.432880  514208 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:27.433210  514208 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:27.433405  514208 main.go:141] libmachine: (functional-844183) Calling .DriverName
I0730 00:26:27.433588  514208 ssh_runner.go:195] Run: systemctl --version
I0730 00:26:27.433618  514208 main.go:141] libmachine: (functional-844183) Calling .GetSSHHostname
I0730 00:26:27.435968  514208 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:27.436422  514208 main.go:141] libmachine: (functional-844183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:38:9f", ip: ""} in network mk-functional-844183: {Iface:virbr1 ExpiryTime:2024-07-30 01:23:59 +0000 UTC Type:0 Mac:52:54:00:82:38:9f Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-844183 Clientid:01:52:54:00:82:38:9f}
I0730 00:26:27.436452  514208 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined IP address 192.168.39.57 and MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:27.436610  514208 main.go:141] libmachine: (functional-844183) Calling .GetSSHPort
I0730 00:26:27.436802  514208 main.go:141] libmachine: (functional-844183) Calling .GetSSHKeyPath
I0730 00:26:27.436960  514208 main.go:141] libmachine: (functional-844183) Calling .GetSSHUsername
I0730 00:26:27.437605  514208 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/functional-844183/id_rsa Username:docker}
I0730 00:26:27.515269  514208 ssh_runner.go:195] Run: sudo crictl images --output json
I0730 00:26:27.547966  514208 main.go:141] libmachine: Making call to close driver server
I0730 00:26:27.547980  514208 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:27.548338  514208 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
I0730 00:26:27.548351  514208 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:27.548392  514208 main.go:141] libmachine: Making call to close connection to plugin binary
I0730 00:26:27.548406  514208 main.go:141] libmachine: Making call to close driver server
I0730 00:26:27.548415  514208 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:27.548663  514208 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:27.548678  514208 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
I0730 00:26:27.548687  514208 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
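
As the stderr above shows, image ls is answered by running sudo crictl images --output json inside the guest over SSH and reformatting the result. A sketch of comparing minikube's short listing with the raw CRI-O view (commands taken from this run; the image set will drift over time):

# Short listing as produced by minikube
out/minikube-linux-amd64 -p functional-844183 image ls --format short

# The same data straight from CRI-O inside the guest
out/minikube-linux-amd64 -p functional-844183 ssh -- sudo crictl images --output json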

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-844183 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kicbase/echo-server           | functional-844183  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-844183  | 9e66c4d5b9571 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/my-image                      | functional-844183  | f73c23df7b25f | 1.47MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-844183 image ls --format table --alsologtostderr:
I0730 00:26:31.483111  514415 out.go:291] Setting OutFile to fd 1 ...
I0730 00:26:31.483375  514415 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:31.483383  514415 out.go:304] Setting ErrFile to fd 2...
I0730 00:26:31.483388  514415 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:31.483576  514415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
I0730 00:26:31.484148  514415 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:31.484245  514415 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:31.484582  514415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:31.484624  514415 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:31.499856  514415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
I0730 00:26:31.500396  514415 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:31.500972  514415 main.go:141] libmachine: Using API Version  1
I0730 00:26:31.500992  514415 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:31.501323  514415 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:31.501493  514415 main.go:141] libmachine: (functional-844183) Calling .GetState
I0730 00:26:31.503241  514415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:31.503287  514415 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:31.518282  514415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
I0730 00:26:31.518697  514415 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:31.519177  514415 main.go:141] libmachine: Using API Version  1
I0730 00:26:31.519206  514415 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:31.519582  514415 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:31.519806  514415 main.go:141] libmachine: (functional-844183) Calling .DriverName
I0730 00:26:31.520025  514415 ssh_runner.go:195] Run: systemctl --version
I0730 00:26:31.520048  514415 main.go:141] libmachine: (functional-844183) Calling .GetSSHHostname
I0730 00:26:31.522780  514415 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:31.523177  514415 main.go:141] libmachine: (functional-844183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:38:9f", ip: ""} in network mk-functional-844183: {Iface:virbr1 ExpiryTime:2024-07-30 01:23:59 +0000 UTC Type:0 Mac:52:54:00:82:38:9f Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-844183 Clientid:01:52:54:00:82:38:9f}
I0730 00:26:31.523213  514415 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined IP address 192.168.39.57 and MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:31.523288  514415 main.go:141] libmachine: (functional-844183) Calling .GetSSHPort
I0730 00:26:31.523468  514415 main.go:141] libmachine: (functional-844183) Calling .GetSSHKeyPath
I0730 00:26:31.523637  514415 main.go:141] libmachine: (functional-844183) Calling .GetSSHUsername
I0730 00:26:31.523810  514415 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/functional-844183/id_rsa Username:docker}
I0730 00:26:31.598884  514415 ssh_runner.go:195] Run: sudo crictl images --output json
I0730 00:26:31.634404  514415 main.go:141] libmachine: Making call to close driver server
I0730 00:26:31.634420  514415 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:31.634710  514415 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:31.634729  514415 main.go:141] libmachine: Making call to close connection to plugin binary
I0730 00:26:31.634740  514415 main.go:141] libmachine: Making call to close driver server
I0730 00:26:31.634731  514415 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
I0730 00:26:31.634748  514415 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:31.634984  514415 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:31.635014  514415 main.go:141] libmachine: Making call to close connection to plugin binary
I0730 00:26:31.635017  514415 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-844183 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-844183"],"size":"4943877"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9e66c4d5b9571ef9ea9962a40fbd79
3b2ddd13bd7a528e22c9939741e5d75f09","repoDigests":["localhost/minikube-local-cache-test@sha256:239db331b65fea40b1ff9036e3eae43cd963211d56dbb877f4bc524a50eb7a38"],"repoTags":["localhost/minikube-local-cache-test:functional-844183"],"size":"3328"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed755
4e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"f73c23df7b25f582e0c9f9de7c839f27a6cea105cc0c3735d636f9416b3c7e38","repoDigests":["localhost/my-image@sha256:5e928d46cc7842e8d6ee09baf59022083124e1e296b99160fa022ca53c64dcd4"],"repoTags":["localhost/my-image:functional-844183"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a1
70315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"47d71d7fd363b81678b83fef3c906a0bcada483f3dcb566dce69e5a168f53d44","repoDigests":["docker.io/library/7e09865d7b654327a9b7ac0338f5a5e3b906ff1605ee026baf02eb1ea6c06f4b-tmp@sha256:2c638bb7415cf4d596e5dc8c9ed8f50e61934ec071e828a2cd2a839a132ad562"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f7
09a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags"
:["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d
128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-844183 image ls --format json --alsologtostderr:
I0730 00:26:31.276441  514391 out.go:291] Setting OutFile to fd 1 ...
I0730 00:26:31.276739  514391 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:31.276751  514391 out.go:304] Setting ErrFile to fd 2...
I0730 00:26:31.276757  514391 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:31.276951  514391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
I0730 00:26:31.277568  514391 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:31.277694  514391 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:31.278095  514391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:31.278152  514391 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:31.293194  514391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
I0730 00:26:31.293748  514391 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:31.294395  514391 main.go:141] libmachine: Using API Version  1
I0730 00:26:31.294425  514391 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:31.294758  514391 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:31.294957  514391 main.go:141] libmachine: (functional-844183) Calling .GetState
I0730 00:26:31.296584  514391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:31.296633  514391 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:31.311822  514391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
I0730 00:26:31.312246  514391 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:31.312694  514391 main.go:141] libmachine: Using API Version  1
I0730 00:26:31.312725  514391 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:31.313016  514391 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:31.313160  514391 main.go:141] libmachine: (functional-844183) Calling .DriverName
I0730 00:26:31.313421  514391 ssh_runner.go:195] Run: systemctl --version
I0730 00:26:31.313458  514391 main.go:141] libmachine: (functional-844183) Calling .GetSSHHostname
I0730 00:26:31.316108  514391 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:31.316528  514391 main.go:141] libmachine: (functional-844183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:38:9f", ip: ""} in network mk-functional-844183: {Iface:virbr1 ExpiryTime:2024-07-30 01:23:59 +0000 UTC Type:0 Mac:52:54:00:82:38:9f Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-844183 Clientid:01:52:54:00:82:38:9f}
I0730 00:26:31.316561  514391 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined IP address 192.168.39.57 and MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:31.316729  514391 main.go:141] libmachine: (functional-844183) Calling .GetSSHPort
I0730 00:26:31.316883  514391 main.go:141] libmachine: (functional-844183) Calling .GetSSHKeyPath
I0730 00:26:31.317048  514391 main.go:141] libmachine: (functional-844183) Calling .GetSSHUsername
I0730 00:26:31.317203  514391 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/functional-844183/id_rsa Username:docker}
I0730 00:26:31.394680  514391 ssh_runner.go:195] Run: sudo crictl images --output json
I0730 00:26:31.433849  514391 main.go:141] libmachine: Making call to close driver server
I0730 00:26:31.433864  514391 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:31.434202  514391 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:31.434215  514391 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
I0730 00:26:31.434234  514391 main.go:141] libmachine: Making call to close connection to plugin binary
I0730 00:26:31.434244  514391 main.go:141] libmachine: Making call to close driver server
I0730 00:26:31.434254  514391 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:31.434535  514391 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
I0730 00:26:31.434569  514391 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:31.434578  514391 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
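
The JSON listing above is a single array of objects with id, repoDigests, repoTags and size fields. For scripting, the tagged names can be pulled out with jq; a sketch that assumes jq is installed on the host (the test itself does not use jq):

# Print one repo:tag per line, e.g. registry.k8s.io/pause:3.9
out/minikube-linux-amd64 -p functional-844183 image ls --format json | jq -r '.[].repoTags[]'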

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-844183 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-844183
size: "4943877"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9e66c4d5b9571ef9ea9962a40fbd793b2ddd13bd7a528e22c9939741e5d75f09
repoDigests:
- localhost/minikube-local-cache-test@sha256:239db331b65fea40b1ff9036e3eae43cd963211d56dbb877f4bc524a50eb7a38
repoTags:
- localhost/minikube-local-cache-test:functional-844183
size: "3328"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-844183 image ls --format yaml --alsologtostderr:
I0730 00:26:27.695357  514231 out.go:291] Setting OutFile to fd 1 ...
I0730 00:26:27.695487  514231 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:27.695494  514231 out.go:304] Setting ErrFile to fd 2...
I0730 00:26:27.695500  514231 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:27.696146  514231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
I0730 00:26:27.696813  514231 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:27.696921  514231 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:27.697271  514231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:27.697309  514231 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:27.712571  514231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36983
I0730 00:26:27.713119  514231 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:27.713796  514231 main.go:141] libmachine: Using API Version  1
I0730 00:26:27.713831  514231 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:27.714209  514231 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:27.714412  514231 main.go:141] libmachine: (functional-844183) Calling .GetState
I0730 00:26:27.716557  514231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:27.716598  514231 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:27.731606  514231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
I0730 00:26:27.732117  514231 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:27.732732  514231 main.go:141] libmachine: Using API Version  1
I0730 00:26:27.732771  514231 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:27.733098  514231 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:27.733280  514231 main.go:141] libmachine: (functional-844183) Calling .DriverName
I0730 00:26:27.733601  514231 ssh_runner.go:195] Run: systemctl --version
I0730 00:26:27.733670  514231 main.go:141] libmachine: (functional-844183) Calling .GetSSHHostname
I0730 00:26:27.736719  514231 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:27.737159  514231 main.go:141] libmachine: (functional-844183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:38:9f", ip: ""} in network mk-functional-844183: {Iface:virbr1 ExpiryTime:2024-07-30 01:23:59 +0000 UTC Type:0 Mac:52:54:00:82:38:9f Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-844183 Clientid:01:52:54:00:82:38:9f}
I0730 00:26:27.737185  514231 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined IP address 192.168.39.57 and MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:27.737391  514231 main.go:141] libmachine: (functional-844183) Calling .GetSSHPort
I0730 00:26:27.737562  514231 main.go:141] libmachine: (functional-844183) Calling .GetSSHKeyPath
I0730 00:26:27.737722  514231 main.go:141] libmachine: (functional-844183) Calling .GetSSHUsername
I0730 00:26:27.737847  514231 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/functional-844183/id_rsa Username:docker}
I0730 00:26:27.839921  514231 ssh_runner.go:195] Run: sudo crictl images --output json
I0730 00:26:27.891522  514231 main.go:141] libmachine: Making call to close driver server
I0730 00:26:27.891537  514231 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:27.891875  514231 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
I0730 00:26:27.891879  514231 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:27.891906  514231 main.go:141] libmachine: Making call to close connection to plugin binary
I0730 00:26:27.891915  514231 main.go:141] libmachine: Making call to close driver server
I0730 00:26:27.891931  514231 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:27.892173  514231 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
I0730 00:26:27.892178  514231 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:27.892212  514231 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 ssh pgrep buildkitd: exit status 1 (220.370093ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image build -t localhost/my-image:functional-844183 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 image build -t localhost/my-image:functional-844183 testdata/build --alsologtostderr: (2.90956794s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-844183 image build -t localhost/my-image:functional-844183 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 47d71d7fd36
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-844183
--> f73c23df7b2
Successfully tagged localhost/my-image:functional-844183
f73c23df7b25f582e0c9f9de7c839f27a6cea105cc0c3735d636f9416b3c7e38
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-844183 image build -t localhost/my-image:functional-844183 testdata/build --alsologtostderr:
I0730 00:26:28.165337  514286 out.go:291] Setting OutFile to fd 1 ...
I0730 00:26:28.165501  514286 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:28.165515  514286 out.go:304] Setting ErrFile to fd 2...
I0730 00:26:28.165521  514286 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 00:26:28.165709  514286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
I0730 00:26:28.166355  514286 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:28.167025  514286 config.go:182] Loaded profile config "functional-844183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 00:26:28.167615  514286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:28.167667  514286 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:28.183684  514286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
I0730 00:26:28.184143  514286 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:28.184743  514286 main.go:141] libmachine: Using API Version  1
I0730 00:26:28.184765  514286 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:28.185103  514286 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:28.185315  514286 main.go:141] libmachine: (functional-844183) Calling .GetState
I0730 00:26:28.187078  514286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0730 00:26:28.187118  514286 main.go:141] libmachine: Launching plugin server for driver kvm2
I0730 00:26:28.208042  514286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
I0730 00:26:28.208569  514286 main.go:141] libmachine: () Calling .GetVersion
I0730 00:26:28.209156  514286 main.go:141] libmachine: Using API Version  1
I0730 00:26:28.209191  514286 main.go:141] libmachine: () Calling .SetConfigRaw
I0730 00:26:28.209570  514286 main.go:141] libmachine: () Calling .GetMachineName
I0730 00:26:28.209763  514286 main.go:141] libmachine: (functional-844183) Calling .DriverName
I0730 00:26:28.209993  514286 ssh_runner.go:195] Run: systemctl --version
I0730 00:26:28.210019  514286 main.go:141] libmachine: (functional-844183) Calling .GetSSHHostname
I0730 00:26:28.212793  514286 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:28.213196  514286 main.go:141] libmachine: (functional-844183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:38:9f", ip: ""} in network mk-functional-844183: {Iface:virbr1 ExpiryTime:2024-07-30 01:23:59 +0000 UTC Type:0 Mac:52:54:00:82:38:9f Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-844183 Clientid:01:52:54:00:82:38:9f}
I0730 00:26:28.213219  514286 main.go:141] libmachine: (functional-844183) DBG | domain functional-844183 has defined IP address 192.168.39.57 and MAC address 52:54:00:82:38:9f in network mk-functional-844183
I0730 00:26:28.213359  514286 main.go:141] libmachine: (functional-844183) Calling .GetSSHPort
I0730 00:26:28.213542  514286 main.go:141] libmachine: (functional-844183) Calling .GetSSHKeyPath
I0730 00:26:28.213704  514286 main.go:141] libmachine: (functional-844183) Calling .GetSSHUsername
I0730 00:26:28.213866  514286 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/functional-844183/id_rsa Username:docker}
I0730 00:26:28.294749  514286 build_images.go:161] Building image from path: /tmp/build.1712660967.tar
I0730 00:26:28.294825  514286 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0730 00:26:28.303957  514286 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1712660967.tar
I0730 00:26:28.307698  514286 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1712660967.tar: stat -c "%s %y" /var/lib/minikube/build/build.1712660967.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1712660967.tar': No such file or directory
I0730 00:26:28.307736  514286 ssh_runner.go:362] scp /tmp/build.1712660967.tar --> /var/lib/minikube/build/build.1712660967.tar (3072 bytes)
I0730 00:26:28.331342  514286 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1712660967
I0730 00:26:28.340012  514286 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1712660967 -xf /var/lib/minikube/build/build.1712660967.tar
I0730 00:26:28.348642  514286 crio.go:315] Building image: /var/lib/minikube/build/build.1712660967
I0730 00:26:28.348723  514286 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-844183 /var/lib/minikube/build/build.1712660967 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0730 00:26:31.001276  514286 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-844183 /var/lib/minikube/build/build.1712660967 --cgroup-manager=cgroupfs: (2.652521745s)
I0730 00:26:31.001360  514286 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1712660967
I0730 00:26:31.012324  514286 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1712660967.tar
I0730 00:26:31.022061  514286 build_images.go:217] Built localhost/my-image:functional-844183 from /tmp/build.1712660967.tar
I0730 00:26:31.022099  514286 build_images.go:133] succeeded building to: functional-844183
I0730 00:26:31.022103  514286 build_images.go:134] failed building to: 
I0730 00:26:31.022129  514286 main.go:141] libmachine: Making call to close driver server
I0730 00:26:31.022141  514286 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:31.022480  514286 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:31.022501  514286 main.go:141] libmachine: Making call to close connection to plugin binary
I0730 00:26:31.022500  514286 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
I0730 00:26:31.022511  514286 main.go:141] libmachine: Making call to close driver server
I0730 00:26:31.022536  514286 main.go:141] libmachine: (functional-844183) Calling .Close
I0730 00:26:31.022781  514286 main.go:141] libmachine: (functional-844183) DBG | Closing plugin on server side
I0730 00:26:31.022828  514286 main.go:141] libmachine: Successfully made call to close driver server
I0730 00:26:31.022846  514286 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.33s)
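
The build log shows the three steps in testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), executed by podman inside the guest. A sketch of an equivalent stand-alone build context; the directory, file contents and output tag below are illustrative, not the repository's actual testdata:

# Recreate a build context matching the logged steps (illustrative paths)
mkdir -p /tmp/minikube-build-demo
echo demo > /tmp/minikube-build-demo/content.txt
cat > /tmp/minikube-build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# Build it inside the cluster's runtime, same flags as the test
out/minikube-linux-amd64 -p functional-844183 image build \
  -t localhost/my-image:functional-844183 /tmp/minikube-build-demo --alsologtostderr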

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.770619885s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-844183
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image load --daemon docker.io/kicbase/echo-server:functional-844183 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)
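
The Setup subtest above stages kicbase/echo-server in the host's Docker daemon and retags it with the profile name; ImageLoadDaemon then copies that image from the daemon into the cluster's CRI-O store. A sketch of the sequence, assuming Docker is available on the host as it is in this CI environment:

# Stage the image on the host under the profile-specific tag
docker pull docker.io/kicbase/echo-server:1.0
docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-844183

# Copy it from the host daemon into the cluster, then confirm it is listed
out/minikube-linux-amd64 -p functional-844183 image load --daemon docker.io/kicbase/echo-server:functional-844183 --alsologtostderr
out/minikube-linux-amd64 -p functional-844183 image ls | grep echo-server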

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image load --daemon docker.io/kicbase/echo-server:functional-844183 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-844183
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image load --daemon docker.io/kicbase/echo-server:functional-844183 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image save docker.io/kicbase/echo-server:functional-844183 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdspecific-port1859182073/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.029203ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdspecific-port1859182073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 ssh "sudo umount -f /mount-9p": exit status 1 (245.163315ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-844183 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdspecific-port1859182073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.65s)
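
The specific-port subtest starts the 9p mount on a fixed port and retries the check until findmnt sees it, which is why the first findmnt call above exits non-zero and the retry succeeds. A sketch of the same flow run by hand; the host directory is illustrative:

# Start the 9p mount on a fixed port in the background
mkdir -p /tmp/mount-demo
out/minikube-linux-amd64 mount -p functional-844183 /tmp/mount-demo:/mount-9p --port 46464 --alsologtostderr -v=1 &
MOUNT_PID=$!

# Give the mount a moment to come up (the test retries instead of sleeping)
sleep 2
out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-844183 ssh -- ls -la /mount-9p

# Unmount inside the guest and stop the host-side mount process
out/minikube-linux-amd64 -p functional-844183 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"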

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image rm docker.io/kicbase/echo-server:functional-844183 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-844183 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.196347457s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.45s)
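
ImageSaveToFile, ImageRemove and ImageLoadFromFile form a round trip: the image is exported to a tarball on the host, removed from the cluster, then re-imported from the tarball. A sketch of that round trip with an illustrative tarball path (the run above used a path inside the CI workspace):

# Export the profile-tagged image from the cluster to a host tarball
out/minikube-linux-amd64 -p functional-844183 image save docker.io/kicbase/echo-server:functional-844183 /tmp/echo-server-save.tar --alsologtostderr

# Remove it from the cluster, then restore it from the tarball and verify
out/minikube-linux-amd64 -p functional-844183 image rm docker.io/kicbase/echo-server:functional-844183 --alsologtostderr
out/minikube-linux-amd64 -p functional-844183 image load /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-844183 image ls | grep echo-server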

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3224439382/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3224439382/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3224439382/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T" /mount1: exit status 1 (286.983494ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-844183 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3224439382/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3224439382/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3224439382/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)
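
Note: the cleanup path exercised here hinges on the --kill=true form of the mount command, which tears down every mount daemon belonging to the profile at once. A minimal sketch of the same flow, using the directories and targets from this run:

    # start several mounts of the same host directory; each runs as its own background daemon
    out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3224439382/001:/mount1 --alsologtostderr -v=1
    out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3224439382/001:/mount2 --alsologtostderr -v=1
    out/minikube-linux-amd64 mount -p functional-844183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3224439382/001:/mount3 --alsologtostderr -v=1
    # verify one of the targets, then kill all mount processes for the profile in one shot
    out/minikube-linux-amd64 -p functional-844183 ssh "findmnt -T" /mount1
    out/minikube-linux-amd64 mount -p functional-844183 --kill=true

The "unable to find parent, assuming dead" lines afterwards just show that the daemons were already gone when the harness tried to stop them individually.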

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 service list -o json
functional_test.go:1490: Took "518.39365ms" to run "out/minikube-linux-amd64 -p functional-844183 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.57:31299
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.57:31299
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-844183
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-844183 image save --daemon docker.io/kicbase/echo-server:functional-844183 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-844183
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
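
Note: the save-to-daemon round trip shown above is: remove the tag from the local Docker daemon, export the image back out of the cluster's runtime with image save --daemon, and confirm Docker can see it again. With the tag used in this run:

    docker rmi docker.io/kicbase/echo-server:functional-844183
    out/minikube-linux-amd64 -p functional-844183 image save --daemon docker.io/kicbase/echo-server:functional-844183 --alsologtostderr
    docker image inspect docker.io/kicbase/echo-server:functional-844183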

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-844183
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-844183
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-844183
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (210.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-161305 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0730 00:38:42.934975  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-161305 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m29.380362245s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (210.08s)
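
Note: the ~3.5-minute start above brings up the multi-control-plane (HA) cluster in a single command; to reproduce it with this run's settings:

    out/minikube-linux-amd64 start -p ha-161305 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr

The E0730 cert_rotation lines interleaved with this and the following tests point at client.crt files under profiles (addons-091578, functional-844183) that had already been cleaned up; since the tests still pass, they appear to be background certificate-watcher noise rather than failures of the tests at hand.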

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-161305 -- rollout status deployment/busybox: (4.316790949s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-k6rhx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-ttjx8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-v2pq7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-k6rhx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-ttjx8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-v2pq7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-k6rhx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-ttjx8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-v2pq7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.45s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-k6rhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-k6rhx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-ttjx8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-ttjx8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-v2pq7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0730 00:40:05.981222  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-v2pq7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
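
Note: the shell pipeline used here takes the nslookup output for host.minikube.internal inside the pod, picks line 5 (the answer line) and field 3 (the address), and then pings that address; in this run it resolves to 192.168.39.1, the host side of the KVM network. Reproduced by hand against one of the busybox pods:

    out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-k6rhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-161305 -- exec busybox-fc5497c4f-k6rhx -- sh -c "ping -c 1 192.168.39.1"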

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-161305 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-161305 -v=7 --alsologtostderr: (54.968221973s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.82s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-161305 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp testdata/cp-test.txt ha-161305:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305:/home/docker/cp-test.txt ha-161305-m02:/home/docker/cp-test_ha-161305_ha-161305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 "sudo cat /home/docker/cp-test_ha-161305_ha-161305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305:/home/docker/cp-test.txt ha-161305-m03:/home/docker/cp-test_ha-161305_ha-161305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m03 "sudo cat /home/docker/cp-test_ha-161305_ha-161305-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305:/home/docker/cp-test.txt ha-161305-m04:/home/docker/cp-test_ha-161305_ha-161305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m04 "sudo cat /home/docker/cp-test_ha-161305_ha-161305-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp testdata/cp-test.txt ha-161305-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m02:/home/docker/cp-test.txt ha-161305:/home/docker/cp-test_ha-161305-m02_ha-161305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305 "sudo cat /home/docker/cp-test_ha-161305-m02_ha-161305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m02:/home/docker/cp-test.txt ha-161305-m03:/home/docker/cp-test_ha-161305-m02_ha-161305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m03 "sudo cat /home/docker/cp-test_ha-161305-m02_ha-161305-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m02:/home/docker/cp-test.txt ha-161305-m04:/home/docker/cp-test_ha-161305-m02_ha-161305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m04 "sudo cat /home/docker/cp-test_ha-161305-m02_ha-161305-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp testdata/cp-test.txt ha-161305-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m03 "sudo cat /home/docker/cp-test.txt"
E0730 00:41:10.081885  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:41:10.087167  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:41:10.097478  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:41:10.117797  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305-m03.txt
E0730 00:41:10.158723  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
E0730 00:41:10.239082  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m03 "sudo cat /home/docker/cp-test.txt"
E0730 00:41:10.399896  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt ha-161305:/home/docker/cp-test_ha-161305-m03_ha-161305.txt
E0730 00:41:10.720991  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305 "sudo cat /home/docker/cp-test_ha-161305-m03_ha-161305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt ha-161305-m02:/home/docker/cp-test_ha-161305-m03_ha-161305-m02.txt
E0730 00:41:11.362265  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 "sudo cat /home/docker/cp-test_ha-161305-m03_ha-161305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m03:/home/docker/cp-test.txt ha-161305-m04:/home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m04 "sudo cat /home/docker/cp-test_ha-161305-m03_ha-161305-m04.txt"
E0730 00:41:12.643232  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp testdata/cp-test.txt ha-161305-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2361062283/001/cp-test_ha-161305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt ha-161305:/home/docker/cp-test_ha-161305-m04_ha-161305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305 "sudo cat /home/docker/cp-test_ha-161305-m04_ha-161305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt ha-161305-m02:/home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 "sudo cat /home/docker/cp-test_ha-161305-m04_ha-161305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 cp ha-161305-m04:/home/docker/cp-test.txt ha-161305-m03:/home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt
E0730 00:41:15.204018  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m03 "sudo cat /home/docker/cp-test_ha-161305-m04_ha-161305-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.98s)
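
Note: every hop in the copy matrix above is a minikube cp followed by an ssh ... sudo cat to prove the file landed. One leg of the matrix, taken verbatim from this run (host to primary node, then primary node to m02):

    out/minikube-linux-amd64 -p ha-161305 cp testdata/cp-test.txt ha-161305:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-161305 cp ha-161305:/home/docker/cp-test.txt ha-161305-m02:/home/docker/cp-test_ha-161305_ha-161305-m02.txt
    out/minikube-linux-amd64 -p ha-161305 ssh -n ha-161305-m02 "sudo cat /home/docker/cp-test_ha-161305_ha-161305-m02.txt"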

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.471723882s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-161305 node delete m03 -v=7 --alsologtostderr: (16.528276948s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-161305 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.29s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestJSONOutput/start/Command (65.84s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-977720 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-977720 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.836906177s)
--- PASS: TestJSONOutput/start/Command (65.84s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-977720 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-977720 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-977720 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-977720 --output=json --user=testUser: (7.342679363s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-637922 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-637922 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.837481ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f50380cd-e0dd-47fd-aaf7-db3721ffcd88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-637922] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f41c8dd3-70b3-4804-89fe-58b360c450b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19346"}}
	{"specversion":"1.0","id":"001f3376-53ce-465a-8dc6-396e7d5d8c62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c468722a-d65f-49ed-bb1b-b09594247de2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig"}}
	{"specversion":"1.0","id":"ac68b434-0e75-47e2-b9fd-417bc00329d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube"}}
	{"specversion":"1.0","id":"b7947985-416f-49e9-8255-6f9be14e671c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f6074811-9eb4-416c-b86b-720150e19d13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"14fd1b12-a0b0-4000-8b7b-2a950e7835ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-637922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-637922
--- PASS: TestErrorJSONOutput (0.19s)
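
Note: each line of the --output=json stream is a CloudEvents-style JSON object whose type field distinguishes steps, info messages, and errors (io.k8s.sigs.minikube.step / .info / .error). As a sketch only (not something the test itself runs, and assuming jq is installed on the host), the error event carrying exit code 56 can be pulled out of the stream like this:

    out/minikube-linux-amd64 start -p json-output-error-637922 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'

As in the test, the failed start still leaves a json-output-error-637922 profile behind, which is why the harness finishes with delete -p.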

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (85.54s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-844520 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-844520 --driver=kvm2  --container-runtime=crio: (42.708125145s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-846828 --driver=kvm2  --container-runtime=crio
E0730 01:08:42.934728  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-846828 --driver=kvm2  --container-runtime=crio: (40.366979823s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-844520
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-846828
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-846828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-846828
helpers_test.go:175: Cleaning up "first-844520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-844520
--- PASS: TestMinikubeProfile (85.54s)
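
Note: this test appears to do little more than switch the active profile back and forth and confirm that profile list reflects the change. By hand, with the two profiles created in this run:

    out/minikube-linux-amd64 profile first-844520
    out/minikube-linux-amd64 profile list -ojson
    out/minikube-linux-amd64 profile second-846828
    out/minikube-linux-amd64 profile list -ojson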

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.69s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-750176 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-750176 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.691738633s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.69s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-750176 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-750176 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
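
Note: verification here is just two ssh probes against the profile started with --mount above: the mount point must be listable and must show up as a 9p filesystem. With this run's profile:

    out/minikube-linux-amd64 -p mount-start-1-750176 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-750176 ssh -- mount | grep 9p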

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-771421 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-771421 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.38116633s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.38s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-771421 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-771421 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-750176 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-771421 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-771421 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-771421
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-771421: (1.274156472s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.53s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-771421
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-771421: (22.53088084s)
--- PASS: TestMountStart/serial/RestartStopped (23.53s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-771421 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-771421 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (118.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-543365 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0730 01:11:10.081056  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-543365 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m57.624614099s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.05s)
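
Note: unlike the --ha cluster earlier, this is a plain two-node topology (one control plane plus one worker, as the later status output shows). The start command from this run:

    out/minikube-linux-amd64 start -p multinode-543365 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-543365 status --alsologtostderr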

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-543365 -- rollout status deployment/busybox: (3.5813596s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-t9w48 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-wzz95 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-t9w48 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-wzz95 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-t9w48 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-wzz95 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.03s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-t9w48 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-t9w48 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-wzz95 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-543365 -- exec busybox-fc5497c4f-wzz95 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (48.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-543365 -v 3 --alsologtostderr
E0730 01:13:25.983018  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-543365 -v 3 --alsologtostderr: (47.709660499s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.29s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-543365 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp testdata/cp-test.txt multinode-543365:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp multinode-543365:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile929286498/001/cp-test_multinode-543365.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp multinode-543365:/home/docker/cp-test.txt multinode-543365-m02:/home/docker/cp-test_multinode-543365_multinode-543365-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m02 "sudo cat /home/docker/cp-test_multinode-543365_multinode-543365-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp multinode-543365:/home/docker/cp-test.txt multinode-543365-m03:/home/docker/cp-test_multinode-543365_multinode-543365-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m03 "sudo cat /home/docker/cp-test_multinode-543365_multinode-543365-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp testdata/cp-test.txt multinode-543365-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp multinode-543365-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile929286498/001/cp-test_multinode-543365-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp multinode-543365-m02:/home/docker/cp-test.txt multinode-543365:/home/docker/cp-test_multinode-543365-m02_multinode-543365.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365 "sudo cat /home/docker/cp-test_multinode-543365-m02_multinode-543365.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp multinode-543365-m02:/home/docker/cp-test.txt multinode-543365-m03:/home/docker/cp-test_multinode-543365-m02_multinode-543365-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m03 "sudo cat /home/docker/cp-test_multinode-543365-m02_multinode-543365-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp testdata/cp-test.txt multinode-543365-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile929286498/001/cp-test_multinode-543365-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt multinode-543365:/home/docker/cp-test_multinode-543365-m03_multinode-543365.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365 "sudo cat /home/docker/cp-test_multinode-543365-m03_multinode-543365.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 cp multinode-543365-m03:/home/docker/cp-test.txt multinode-543365-m02:/home/docker/cp-test_multinode-543365-m03_multinode-543365-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 ssh -n multinode-543365-m02 "sudo cat /home/docker/cp-test_multinode-543365-m03_multinode-543365-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.24s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-543365 node stop m03: (1.37315171s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-543365 status: exit status 7 (442.504523ms)

                                                
                                                
-- stdout --
	multinode-543365
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-543365-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-543365-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-543365 status --alsologtostderr: exit status 7 (437.355168ms)

                                                
                                                
-- stdout --
	multinode-543365
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-543365-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-543365-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0730 01:13:39.003790  534491 out.go:291] Setting OutFile to fd 1 ...
	I0730 01:13:39.003901  534491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:13:39.003910  534491 out.go:304] Setting ErrFile to fd 2...
	I0730 01:13:39.003914  534491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 01:13:39.004128  534491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19346-495103/.minikube/bin
	I0730 01:13:39.004302  534491 out.go:298] Setting JSON to false
	I0730 01:13:39.004328  534491 mustload.go:65] Loading cluster: multinode-543365
	I0730 01:13:39.004378  534491 notify.go:220] Checking for updates...
	I0730 01:13:39.004769  534491 config.go:182] Loaded profile config "multinode-543365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 01:13:39.004796  534491 status.go:255] checking status of multinode-543365 ...
	I0730 01:13:39.005287  534491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:13:39.005336  534491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:13:39.021437  534491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46199
	I0730 01:13:39.021853  534491 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:13:39.022540  534491 main.go:141] libmachine: Using API Version  1
	I0730 01:13:39.022566  534491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:13:39.022947  534491 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:13:39.023203  534491 main.go:141] libmachine: (multinode-543365) Calling .GetState
	I0730 01:13:39.024860  534491 status.go:330] multinode-543365 host status = "Running" (err=<nil>)
	I0730 01:13:39.024881  534491 host.go:66] Checking if "multinode-543365" exists ...
	I0730 01:13:39.025179  534491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:13:39.025215  534491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:13:39.040658  534491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0730 01:13:39.041102  534491 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:13:39.041567  534491 main.go:141] libmachine: Using API Version  1
	I0730 01:13:39.041590  534491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:13:39.041893  534491 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:13:39.042052  534491 main.go:141] libmachine: (multinode-543365) Calling .GetIP
	I0730 01:13:39.044955  534491 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:13:39.045389  534491 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:13:39.045442  534491 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:13:39.045565  534491 host.go:66] Checking if "multinode-543365" exists ...
	I0730 01:13:39.045874  534491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:13:39.045912  534491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:13:39.061450  534491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0730 01:13:39.061850  534491 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:13:39.062425  534491 main.go:141] libmachine: Using API Version  1
	I0730 01:13:39.062467  534491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:13:39.062781  534491 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:13:39.062989  534491 main.go:141] libmachine: (multinode-543365) Calling .DriverName
	I0730 01:13:39.063196  534491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 01:13:39.063228  534491 main.go:141] libmachine: (multinode-543365) Calling .GetSSHHostname
	I0730 01:13:39.066275  534491 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:13:39.066681  534491 main.go:141] libmachine: (multinode-543365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:72:a5", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:10:51 +0000 UTC Type:0 Mac:52:54:00:24:72:a5 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:multinode-543365 Clientid:01:52:54:00:24:72:a5}
	I0730 01:13:39.066753  534491 main.go:141] libmachine: (multinode-543365) DBG | domain multinode-543365 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:72:a5 in network mk-multinode-543365
	I0730 01:13:39.066809  534491 main.go:141] libmachine: (multinode-543365) Calling .GetSSHPort
	I0730 01:13:39.067264  534491 main.go:141] libmachine: (multinode-543365) Calling .GetSSHKeyPath
	I0730 01:13:39.067453  534491 main.go:141] libmachine: (multinode-543365) Calling .GetSSHUsername
	I0730 01:13:39.067621  534491 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365/id_rsa Username:docker}
	I0730 01:13:39.147943  534491 ssh_runner.go:195] Run: systemctl --version
	I0730 01:13:39.153642  534491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 01:13:39.168127  534491 kubeconfig.go:125] found "multinode-543365" server: "https://192.168.39.235:8443"
	I0730 01:13:39.168164  534491 api_server.go:166] Checking apiserver status ...
	I0730 01:13:39.168206  534491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 01:13:39.183941  534491 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup
	W0730 01:13:39.193839  534491 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0730 01:13:39.193904  534491 ssh_runner.go:195] Run: ls
	I0730 01:13:39.198020  534491 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0730 01:13:39.203131  534491 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0730 01:13:39.203159  534491 status.go:422] multinode-543365 apiserver status = Running (err=<nil>)
	I0730 01:13:39.203172  534491 status.go:257] multinode-543365 status: &{Name:multinode-543365 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 01:13:39.203193  534491 status.go:255] checking status of multinode-543365-m02 ...
	I0730 01:13:39.203493  534491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:13:39.203537  534491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:13:39.219388  534491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0730 01:13:39.219866  534491 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:13:39.220396  534491 main.go:141] libmachine: Using API Version  1
	I0730 01:13:39.220424  534491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:13:39.220807  534491 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:13:39.221006  534491 main.go:141] libmachine: (multinode-543365-m02) Calling .GetState
	I0730 01:13:39.222476  534491 status.go:330] multinode-543365-m02 host status = "Running" (err=<nil>)
	I0730 01:13:39.222496  534491 host.go:66] Checking if "multinode-543365-m02" exists ...
	I0730 01:13:39.222791  534491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:13:39.222846  534491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:13:39.241817  534491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43431
	I0730 01:13:39.242257  534491 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:13:39.242823  534491 main.go:141] libmachine: Using API Version  1
	I0730 01:13:39.242848  534491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:13:39.243207  534491 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:13:39.243414  534491 main.go:141] libmachine: (multinode-543365-m02) Calling .GetIP
	I0730 01:13:39.246777  534491 main.go:141] libmachine: (multinode-543365-m02) DBG | domain multinode-543365-m02 has defined MAC address 52:54:00:1a:b1:ba in network mk-multinode-543365
	I0730 01:13:39.247238  534491 main.go:141] libmachine: (multinode-543365-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:b1:ba", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:12:03 +0000 UTC Type:0 Mac:52:54:00:1a:b1:ba Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-543365-m02 Clientid:01:52:54:00:1a:b1:ba}
	I0730 01:13:39.247274  534491 main.go:141] libmachine: (multinode-543365-m02) DBG | domain multinode-543365-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:1a:b1:ba in network mk-multinode-543365
	I0730 01:13:39.247437  534491 host.go:66] Checking if "multinode-543365-m02" exists ...
	I0730 01:13:39.247734  534491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:13:39.247779  534491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:13:39.263933  534491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39011
	I0730 01:13:39.264392  534491 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:13:39.264947  534491 main.go:141] libmachine: Using API Version  1
	I0730 01:13:39.264976  534491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:13:39.265293  534491 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:13:39.265518  534491 main.go:141] libmachine: (multinode-543365-m02) Calling .DriverName
	I0730 01:13:39.265700  534491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 01:13:39.265719  534491 main.go:141] libmachine: (multinode-543365-m02) Calling .GetSSHHostname
	I0730 01:13:39.268642  534491 main.go:141] libmachine: (multinode-543365-m02) DBG | domain multinode-543365-m02 has defined MAC address 52:54:00:1a:b1:ba in network mk-multinode-543365
	I0730 01:13:39.269130  534491 main.go:141] libmachine: (multinode-543365-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:b1:ba", ip: ""} in network mk-multinode-543365: {Iface:virbr1 ExpiryTime:2024-07-30 02:12:03 +0000 UTC Type:0 Mac:52:54:00:1a:b1:ba Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-543365-m02 Clientid:01:52:54:00:1a:b1:ba}
	I0730 01:13:39.269199  534491 main.go:141] libmachine: (multinode-543365-m02) DBG | domain multinode-543365-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:1a:b1:ba in network mk-multinode-543365
	I0730 01:13:39.269326  534491 main.go:141] libmachine: (multinode-543365-m02) Calling .GetSSHPort
	I0730 01:13:39.269487  534491 main.go:141] libmachine: (multinode-543365-m02) Calling .GetSSHKeyPath
	I0730 01:13:39.269617  534491 main.go:141] libmachine: (multinode-543365-m02) Calling .GetSSHUsername
	I0730 01:13:39.269728  534491 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19346-495103/.minikube/machines/multinode-543365-m02/id_rsa Username:docker}
	I0730 01:13:39.356554  534491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 01:13:39.375537  534491 status.go:257] multinode-543365-m02 status: &{Name:multinode-543365-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0730 01:13:39.375576  534491 status.go:255] checking status of multinode-543365-m03 ...
	I0730 01:13:39.375878  534491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0730 01:13:39.375916  534491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0730 01:13:39.392303  534491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0730 01:13:39.392757  534491 main.go:141] libmachine: () Calling .GetVersion
	I0730 01:13:39.393246  534491 main.go:141] libmachine: Using API Version  1
	I0730 01:13:39.393275  534491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0730 01:13:39.393624  534491 main.go:141] libmachine: () Calling .GetMachineName
	I0730 01:13:39.393844  534491 main.go:141] libmachine: (multinode-543365-m03) Calling .GetState
	I0730 01:13:39.395303  534491 status.go:330] multinode-543365-m03 host status = "Stopped" (err=<nil>)
	I0730 01:13:39.395320  534491 status.go:343] host is not running, skipping remaining checks
	I0730 01:13:39.395328  534491 status.go:257] multinode-543365-m03 status: &{Name:multinode-543365-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 node start m03 -v=7 --alsologtostderr
E0730 01:13:42.934659  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
E0730 01:14:13.128402  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-543365 node start m03 -v=7 --alsologtostderr: (38.383476777s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.02s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-543365 node delete m03: (1.907739543s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.43s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (180.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-543365 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0730 01:23:42.934521  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-543365 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m59.777037779s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-543365 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.33s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (41.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-543365
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-543365-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-543365-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.192385ms)

                                                
                                                
-- stdout --
	* [multinode-543365-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19346
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-543365-m02' is duplicated with machine name 'multinode-543365-m02' in profile 'multinode-543365'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-543365-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-543365-m03 --driver=kvm2  --container-runtime=crio: (39.713463697s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-543365
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-543365: exit status 80 (213.779331ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-543365 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-543365-m03 already exists in multinode-543365-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-543365-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.03s)

                                                
                                    
x
+
TestScheduledStopUnix (113.61s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-405458 --memory=2048 --driver=kvm2  --container-runtime=crio
E0730 01:30:05.983512  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-405458 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.978949243s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-405458 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-405458 -n scheduled-stop-405458
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-405458 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-405458 --cancel-scheduled
E0730 01:30:53.128995  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-405458 -n scheduled-stop-405458
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-405458
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-405458 --schedule 15s
E0730 01:31:10.083951  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/functional-844183/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-405458
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-405458: exit status 7 (65.974192ms)

                                                
                                                
-- stdout --
	scheduled-stop-405458
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-405458 -n scheduled-stop-405458
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-405458 -n scheduled-stop-405458: exit status 7 (63.618962ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-405458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-405458
--- PASS: TestScheduledStopUnix (113.61s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (193.89s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3263784536 start -p running-upgrade-566823 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3263784536 start -p running-upgrade-566823 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.976096523s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-566823 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-566823 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.861402472s)
helpers_test.go:175: Cleaning up "running-upgrade-566823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-566823
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-566823: (1.230069739s)
--- PASS: TestRunningBinaryUpgrade (193.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545077 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-545077 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (79.051404ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-545077] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19346
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19346-495103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19346-495103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (97.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545077 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-545077 --driver=kvm2  --container-runtime=crio: (1m37.106708334s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-545077 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.37s)

                                                
                                    
x
+
TestPause/serial/Start (106.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-030027 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-030027 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m46.44407349s)
--- PASS: TestPause/serial/Start (106.44s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (42.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545077 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0730 01:33:42.934710  502384 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19346-495103/.minikube/profiles/addons-091578/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-545077 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.680235255s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-545077 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-545077 status -o json: exit status 2 (320.874058ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-545077","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-545077
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-545077: (1.207269084s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (26.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545077 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-545077 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.77199013s)
--- PASS: TestNoKubernetes/serial/Start (26.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-545077 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-545077 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.546936ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (24.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.31526218s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (10.054004988s)
--- PASS: TestNoKubernetes/serial/ProfileList (24.37s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (36.98s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-030027 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-030027 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.953911342s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-545077
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-545077: (1.321839247s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (22.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545077 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-545077 --driver=kvm2  --container-runtime=crio: (22.920762148s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.92s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (134.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1530665 start -p stopped-upgrade-596705 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1530665 start -p stopped-upgrade-596705 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m8.328452534s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1530665 -p stopped-upgrade-596705 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1530665 -p stopped-upgrade-596705 stop: (1.518919739s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-596705 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-596705 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.197312011s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (134.05s)

                                                
                                    
x
+
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-030027 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-030027 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-030027 --output=json --layout=cluster: exit status 2 (245.747113ms)

                                                
                                                
-- stdout --
	{"Name":"pause-030027","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-030027","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.59s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-030027 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.59s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.71s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-030027 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.71s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.96s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-030027 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.96s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-545077 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-545077 "sudo systemctl is-active --quiet service kubelet": exit status 1 (188.20061ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-596705
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    

Test skip (37/233)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
152 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
153 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
154 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
157 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
158 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
182 TestImageBuild 0
209 TestKicCustomNetwork 0
210 TestKicExistingNetwork 0
211 TestKicCustomSubnet 0
212 TestKicStaticIP 0
244 TestChangeNoneUser 0
247 TestScheduledStopWindows 0
249 TestSkaffold 0
251 TestInsufficientStorage 0
255 TestMissingContainerUpgrade 0
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
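TestGvisorAddon is gated behind a test flag and only runs when the job is invoked with --gvisor=true; this run left it at the default. A minimal sketch of a flag-gated skip, assuming a custom flag registered in the test package (everything except the flag name is illustrative):

package addons_test

import (
	"flag"
	"testing"
)

// The gVisor test must be opted into explicitly, e.g. `go test -gvisor`.
var gvisor = flag.Bool("gvisor", false, "run the gVisor addon test")

func TestGvisorAddonSketch(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... gVisor runtime-class assertions would run here ...
}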

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
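The KIC-specific cases below (custom network, existing network, custom subnet, static IP, insufficient storage, missing-container upgrade) are all expected to skip here, since this job runs the kvm2 driver rather than docker or podman. A hedged sketch of such a driver gate, assuming the driver under test is exposed through an environment variable (TEST_DRIVER is an assumption for this sketch, not minikube's real knob):

package kic_test

import (
	"os"
	"testing"
)

// skipUnlessContainerDriver skips a test unless the driver under test is
// docker or podman (KIC). TEST_DRIVER is assumed for illustration only.
func skipUnlessContainerDriver(t *testing.T) {
	t.Helper()
	driver := os.Getenv("TEST_DRIVER")
	if driver != "docker" && driver != "podman" {
		t.Skipf("only runs with docker/podman driver, got %q", driver)
	}
}

func TestKicCustomNetworkSketch(t *testing.T) {
	skipUnlessContainerDriver(t)
	// ... custom-network assertions would run here ...
}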

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
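TestScheduledStopWindows is platform-gated and will always be skipped on this Linux runner; the usual Go idiom for this is a runtime.GOOS check, roughly:

package schedstop_test

import (
	"runtime"
	"testing"
)

func TestScheduledStopWindowsSketch(t *testing.T) {
	// Skip everywhere except Windows, matching the message in the log above.
	if runtime.GOOS != "windows" {
		t.Skip("test only runs on windows")
	}
	// ... scheduled-stop assertions would run here ...
}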

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
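TestSkaffold (like the PodmanEnv skip at the top of this list) is gated on the container runtime: it drives docker-env, so it skips whenever the cluster under test uses CRI-O. A sketch of such a runtime gate, assuming the runtime is passed in as a test flag (the flag name is an assumption for this sketch; the real harness may wire it differently):

package skaffold_test

import (
	"flag"
	"testing"
)

// containerRuntime is an assumed flag for this sketch only.
var containerRuntime = flag.String("container-runtime", "docker", "container runtime under test")

func TestSkaffoldSketch(t *testing.T) {
	if *containerRuntime != "docker" {
		t.Skipf("skaffold requires docker-env, currently testing %s container runtime", *containerRuntime)
	}
	// ... skaffold build/deploy assertions would run here ...
}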

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    